New post about how to write data from an Apache Spark DataFrame into an Elasticsearch/OpenSearch database #datascience #databricks #elasticsearch #opensearch #bigdata #apachespark #spark #tech #programming #python:
https://pedro-faria.netlify.app/posts/2025/2025-03-16-spark-elasticsearch/en/
Recession rut: The major New Zealand companies doing it tough https://www.byteseu.com/785792/ #NewZealand #recession #spark #skycity #fletcher
I know this is a long shot but does anybody know how to set a "secondary role" (or activate all secondary roles) in #Snowflake via the #Spark connector? I'm going to note that a lot of things that seem like they should work don't, so I'd be grateful for ideas from folks who are in a position to actually test this.
New call for proposals: #Spark enables researchers from all disciplines to test or develop novel and unconventional scientific approaches, methods, theories or ideas within a short time.
Submission deadline: 4 March 2025.
https://www.snf.ch/en/CVNR0Q5f3P32Cg9f/news/spark-call-for-proposals
Just caught up with the recent Delta Lake webinar,
> Revolutionizing Delta Lake workflows on AWS Lambda with Polars, DuckDB, Daft & Rust
Some interesting hints there regarding lightweight processing of big-ish data. The ideas carry over easily to frameworks other than Lambda, e.g. #ApacheAirflow tasks
This is a customer-facing role, so if that's not your thing, keep scrolling.
TLDR: If you know Hadoop and live close enough to Belfast to commute, you should apply.
I've posted this before, but it's been a little while #fedihire. Also, adding some additional information this time. This is my team. We are already on three continents and 6 timezones, but #Belfast is a new location for the team. I know literally nothing about the office.
I know that in a lot of places Hadoop is the past, and sure, we see a ton of #Spark (I do not understand why that is not listed in the job description; maybe they want to emphasize that we need Hadoop expertise?). You can see all the projects we support at https://www.openlogic.com/supported-technology
It depends on how you count, as I was on two teams during a transition, but I've been on this team for over 5 years now. It's a great team. I've been with the company right at 7 years now. I can't say how we compare to other Belfast employers, but this is well more than double my tenure at any other employer (even if you count UNC-CH as a single employer rather than the different departments, I've beaten them by well over a year at this point).
My manager has been on this team for almost 15 years. His manager has been with this team for almost as long as me, but with the company much longer. His manager has been here almost as long as me (I actually did orientation with him). His manager is a her and she's been here almost as long as me. So, obviously, this is a place where people want to stay!
Our team has a lot of testosterone, but when I started, our CEO was a woman. The GM for the division is a woman.
My manager is black. The manager of our sister team is black.
I think you'll find our team and company is concerned about your work product and not how you dress, what bathroom you use, or the color of your skin.
If you take a look at our careers page, you'll see this:
Work Should Be Fun
There’s always something to look forward to as a Perforce employee: scavenger hunts, community lunches, summer events, virtual games, and year-end celebrations just to name a few.
We take that shit seriously. Nauseatingly so sometimes, lol.
Actually, we take everything on the careers page seriously, but I know from experience that some places treat support like they are a shoe sole to be worn down. Not so here. It's not all rainbows and sunshine, of course. The whole point is that the customer is having an issue! Our customers treat us with respect because management demands that they do.
------
The Director of Product Development at Perforce is searching for an Enterprise Architect (#BigData Solutions) to join the team. We are looking for an individual who loves data solutions, views technology as a lifestyle, and has a passion for open source software. In this position, you'll get hands-on experience building, configuring, deploying, and troubleshooting our big data solutions, and you'll contribute to our most strategic product offerings.
At OpenLogic we do #opensource right, and our people make it happen. We provide the technical expertise required for maintaining healthy implementations of hundreds of integrated open source software packages. If your skills meet any of the specs below, now is the time to apply to be a part of our passionate team.
Responsibilities:
Troubleshoot and conduct root cause analysis on enterprise-scale big data systems operated by third-party clients, assisting them in resolving complex issues in mission-critical environments.
Install, configure, validate, and monitor a bundle of open source packages that deliver a cohesive world class big data solution.
Evaluate existing Big Data systems operated by third-party clients and identify areas for improvement.
Administer automation for provisioning and updating our big data distribution.
Requirements:
Demonstrable proficiency in #Linux command-line essentials
Strong #SQL and #NoSQL background required
Demonstrable experience designing or testing disaster recovery plans, including backup and recovery
Must have a firm understanding of the #Hadoop ecosystem, including the various open source packages that contribute to a broader solution, as well as an appreciation for the turmoil and turf wars among vendors in the space
Must understand the unique use cases and requirements for platform specific deployments, including on-premises vs cloud vs hybrid, as well as bare metal vs virtualization
Demonstrable experience in one or more cloud-based technologies (AWS or Azure preferred)
Experience with #virtualization and #containerization at scale
Experience creating architectural blueprints and best practices for Hadoop implementations
Some programming experience required
#Database administration experience very desirable
Experience working in enterprise/carrier production environments
Understanding of #DevOps and automation concepts
#Ansible playbook development very desirable
Experience with #Git-based version control
Be flexible and willing to support occasional after-hours and weekend work
Experience working with a geographically dispersed virtual team
https://jobs.lever.co/perforce/479dfdd6-6e76-4651-9ddb-c4b652ab7b74
At #SocialScience #Research Park #SPARK #SBARC @cardiffuni @Cwmpas_Coop @WISERDNews #Cardiff
Interesting discussions on #DigitalInclusion #AI #DataCooperatives #Wales #BasqueCountry #SocialEconomy
Day 4 of 12: Understanding key terms for data professionals
As more and more data is generated, we need technologies to process it efficiently. Companies also want to be able to process data in (near) real time. This is where tools such as Spark or Kafka (Big Data Technologies) come into play.
Today's Small Practical Project:
Develop a small Python pipeline that simulates, processes, and saves real-time data: for example, simulate a real-time stream of temperature readings, then check whether each reading exceeds a critical threshold. As an extension, you can plot the temperature data in real time.
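A minimal sketch of that exercise, using only the standard library. The names (`simulate_stream`, `process`, `THRESHOLD`) and the 30 °C cutoff are my own assumptions, not part of the original prompt:

```python
# Sketch: simulate a stream of temperature readings, flag values above a
# critical threshold, and persist the processed records to CSV.
import csv
import random
import time

THRESHOLD = 30.0  # assumed critical temperature in degrees Celsius

def simulate_stream(n_readings, interval=0.0):
    """Yield (timestamp, temperature) pairs, one every `interval` seconds."""
    for _ in range(n_readings):
        yield time.time(), random.uniform(15.0, 40.0)
        time.sleep(interval)

def process(readings, out_path="temperatures.csv"):
    """Write each reading to CSV with an alert flag; return the alerts."""
    alerts = []
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "temperature", "alert"])
        for ts, temp in readings:
            alert = temp > THRESHOLD
            if alert:
                alerts.append((ts, temp))
            writer.writerow([ts, round(temp, 2), alert])
    return alerts

if __name__ == "__main__":
    random.seed(42)  # reproducible demo run
    alerts = process(simulate_stream(20))
    print(f"{len(alerts)} of 20 readings exceeded {THRESHOLD} C")
```

For the real-time plotting extension, the same generator could feed a matplotlib animation instead of (or in addition to) the CSV writer.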
We’re thrilled to announce the release of orbital 0.3.0!
The orbital package allows you to run predictions from tidymodels workflows directly inside databases. This new version brings support for classification models and the `augment()` function.
Read more in the tidyverse blog: https://www.tidyverse.org/blog/2025/01/orbital-0-3-0/
AI-Powered @github Spark lets you build apps using natural language
https://www.admin-magazine.com/News/AI-Powered-GitHub-Spark-Released-for-Creating-Micro-Apps
#GitHub #Spark #AI #OpenSource #NaturalLanguage #apps #FOSS #ArtificialIntelligence
Not only will #GitHub #Copilot integrate all the most powerful #LLMs, which developers will be able to choose based on the task (#Claude 3.5 Sonnet from #Anthropic, #Gemini 1.5 Pro from #Google, #GPT4o and #o1 from #OpenAI)...
They also presented #Spark: a tool for building applications entirely in natural language.
"Sparks" are fully functional micro apps that can integrate AI features and external data sources.
https://github.blog/news-insights/product-news/bringing-developer-choice-to-copilot/
In today’s #data-driven world, having #inhouse, #outsourced, or #dedicated data specialists is crucial for making informed business decisions.
Whether you need an in-house team or an outsourced data engineering service, skilled professionals ensure that your business can collect, process, and extract valuable insights from data, enabling better decision-making, operational efficiency, and competitive advantage.
Info: https://www.ibm.com/think/topics/data-engineering
Tomorrow marks the 10-month anniversary of the open-source book "Introduction to pyspark".
This is an open, introductory book on the Python API of Apache Spark (pyspark).
Link to book project: https://github.com/pedropark99/Introd-pyspark
@Python @pythonhub #pyspark #python #community #datascience #data #spark #apache #bigdata #programming #book #tech #technology
#metal guy tries #guitar amp ai and searches for sexy manboobs crunch #spark
https://youtube.com/watch?v=E7qHqFLfYnU&si=FGBrcoX-jdjVH95t
Spark.
Now scanned, Acrylics on... pretends-to-be-wood-but-is-actually-just-pressed-cardboard. :D