About Us

About Eva:


Eva is a results-driven technology & Amazon services provider that helps eCommerce brands, resellers, and agencies maximize their growth while reducing operational costs. Eva combines proprietary optimization & intelligence software with Amazon management and 3PL services to maximize the visibility, efficiency, and profitability of our clients’ Amazon operations.


Eva primarily focuses on Pricing, Advertising, and Inventory Management on Amazon. 90% of Eva’s customers are in the US.


Eva has a bottom-up culture, where we expect each individual to drive their own business and apply Eva's principles to every single task. Eva's Principles are Customer Obsession, Ownership, Learn and Simplify, Be a Challenger, and Deliver Results.


Eva Is Looking For a Big Data Engineer To Join Our Growing Team!


Are you an experienced Big Data Engineer looking for a new challenge and an opportunity to advance your career?


If you are an energetic, highly motivated individual with a can-do attitude who enjoys networking and establishing relationships, we have the perfect job for you!


The Big Data Engineer is responsible for designing, developing, and managing large-scale data processing systems and infrastructure that handle and analyze massive volumes of data, commonly referred to as "big data." The role also involves developing and implementing scalable, efficient, and reliable data pipelines and workflows that collect, store, process, and analyze data from sources such as databases, data lakes, data warehouses, and streaming data.



Duties/Responsibilities:


  • Designing the architecture of a big data platform,
  • Maintaining data pipelines,
  • Customizing and managing integration tools, databases, warehouses, and analytical systems,
  • Managing and structuring data,
  • Setting up data-access tools for the R&D team,
  • Developing and maintaining data pipelines using ETL processes,
  • Working closely with the data science team to implement data analytics pipelines,
  • Helping define data governance policies and supporting data-versioning processes,


Required Skills/Abilities:


  • Bachelor’s degree, preferably in Computer Engineering, Software Engineering, or another quantitative field,
  • 3+ years of experience performing complex quantitative analysis to help guide decisions, preferably in the software development industry,
  • Hands-on experience with AWS big data services such as S3, Glue, Athena, EMR, Kinesis, DynamoDB, CloudSearch, and related services,
  • Experience in writing and optimizing big data queries using tools like SQL, HiveQL, and SparkSQL,
  • Experience with programming languages and runtimes such as Node.js, Python, Java, and Scala,
  • Fluent in English, both written and spoken,
  • Presentation skills – the ability to write and speak clearly, communicating complex ideas in a way that is easy to understand,
  • A good communicator with effective stakeholder management & conflict resolution skills,