
Closed
Posted:
Paid on delivery
My current Node.js service ingests roughly one million rows from a CSV/Excel file into PostgreSQL, yet the upload still takes far too long. The data source is a CSV/Excel file, and the necessary transformations are already implemented. What I have not been able to confirm is whether the pipeline truly processes data in discrete batches or pushes everything through at once; it might even be a mix, which could be part of the slowdown. I need the import stage streamlined so the full dataset lands in Postgres dramatically faster, while keeping data integrity intact and the existing transformation logic untouched. Please analyse the current implementation, profile the bottlenecks, and then update the ingestion flow, whether that means switching to COPY, streaming the file in controlled chunks, tuning connection pooling, or any other sound Node.js/PostgreSQL technique. I will consider the work complete when a fresh run with the same file loads reliably in a fraction of the current time, memory usage remains stable, and the rest of the application codebase operates exactly as before.
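The COPY route the post mentions is usually the biggest single win, because it streams all rows over one protocol message instead of one INSERT round-trip per row. Below is a minimal sketch, assuming the third-party `pg` and `pg-copy-streams` packages and a placeholder table name `target_table` (neither is specified in the post); the requires sit inside the function so the sketch stands on its own.

```javascript
// Sketch only: "pg-copy-streams" and the table/column names are assumptions.
// `client` would be a connected node-postgres client; `csvStream` a readable
// stream over the CSV file (e.g. fs.createReadStream(path)).
async function bulkLoad(client, csvStream) {
  const { from: copyFrom } = require('pg-copy-streams');
  const { pipeline } = require('stream/promises');

  // COPY ... FROM STDIN ingests the whole stream server-side in one pass,
  // avoiding per-row INSERT overhead entirely.
  const copyStream = client.query(
    copyFrom('COPY target_table FROM STDIN WITH (FORMAT csv, HEADER true)')
  );

  // pipeline() propagates backpressure end-to-end, so memory stays flat
  // even at a million rows: the file is read only as fast as Postgres drains it.
  await pipeline(csvStream, copyStream);
}
```

Because `pipeline` applies backpressure, this keeps memory stable without any manual chunk bookkeeping; the transformation step could be spliced in as a `Transform` stream between the reader and the COPY stream.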
Project ID: 40075227
17 bids
Remote project
Last activity: 1 month ago
17 freelancers are bidding an average of ₹20,794 INR for this project

Hello, I can quickly analyze your current Node.js ingestion flow, profile where the slowdown occurs, and refactor only the import stage to drastically improve performance—using techniques like PostgreSQL COPY, true streaming/batching, and connection tuning—without touching your existing transformation logic. I’ve optimized large-scale CSV/Excel → Postgres pipelines before (1M+ rows) with major speed gains and stable memory usage. Would you like me to start by benchmarking the current load time and memory profile to pinpoint the exact bottleneck first?
₹22,000 INR in 7 days
3.4

Hi, I’m Malix Azis, a backend developer with strong hands-on experience optimizing high-volume data ingestion pipelines in Node.js, where performance, memory stability, and data integrity are non-negotiable. ✨ I’ve worked on services processing millions of rows from CSV and Excel sources into PostgreSQL and understand how subtle issues, like mixed batching strategies, inefficient row inserts, or poor stream backpressure, can quietly destroy performance. ⚙️ Your requirement to keep all existing transformation logic untouched while dramatically speeding up the import is very clear, and my approach would be to first profile the current flow end-to-end, then isolate whether the slowdown comes from parsing, batching, DB writes, or connection handling. ⭐ From there, I’d refactor only the ingestion layer using proven techniques such as PostgreSQL COPY with streams, controlled chunking, transaction tuning, and pool optimization, ensuring memory usage stays flat and the rest of the codebase behaves exactly the same. ✨ I focus on measurable results, so success would be validated by repeatable test runs showing significant time reduction with no regressions. ⚙️ Do you already log per-stage timings (parse, transform, insert), or should profiling hooks be added temporarily to pinpoint the exact bottleneck before refactoring? ⭐ Thanks for reviewing my proposal, and I look forward to working with you. Have a great day!
₹15,000 INR in 3 days
2.9

Dear Hiring Manager, I am a full-stack developer with extensive experience optimizing Node.js services and PostgreSQL pipelines. I understand your import process already handles transformations, but large CSV/Excel uploads (~1M rows) are slow. I can analyze the current implementation, pinpoint bottlenecks, and optimize the ingestion pipeline without altering your transformation logic.

Approach & Way of Working:
- Profiling & Bottleneck Analysis: Review the existing code to see whether data is processed in batches or streamed inefficiently, and identify memory or connection bottlenecks.
- Optimized Ingestion Flow: Implement techniques such as PostgreSQL COPY for bulk inserts, controlled streaming of CSV/Excel data, and efficient connection pooling. All changes will maintain data integrity and existing transformations.
- Resource Management: Ensure memory usage remains stable during import and CPU usage is optimized for Node.js asynchronous operations.
- Testing & Validation: Run the optimized pipeline with your sample dataset, ensuring faster ingestion times, identical output, and zero impact on other parts of the application.
- Documentation & Handover: Provide a brief explanation of changes, best practices for future large-file imports, and guidance for maintenance.

Best regards,
₹25,000 INR in 7 days
1.2

Dear sir/madam, I am offering my services on short notice. Relevant Skills and Experience: Please consider me and give me a chance to impress you with my quality services.
₹25,000 INR in 7 days
0.0

Hi, I am an IIT grad, and I will make it a reality for you. I'm intrigued by the prospect of tackling the performance bottleneck in your Postgres CSV ingestion process with Node. Kindly click on the chat button so we can discuss and get started. I will share my prior projects and my resume too. I have been freelancing since 2019 and have worked at top MNCs in both the USA and India. Let's connect.
₹12,500 INR in 7 days
0.0

I've recently finished a project just like this, where I helped someone bring their creative vision to life. I can do the same for you by shaping a solution that fits your style and goals without overcomplicating the process. You won't find someone better aligned with what you're looking for. I've paid close attention to your focus on a clean and professional result. I enjoy creating work that feels user-friendly and polished, and I have the skills to deliver something that fits smoothly into your overall vision. I will analyze your current ingestion pipeline to identify bottlenecks, then optimize the flow using Node.js and PostgreSQL best practices, such as streaming in batches and leveraging efficient COPY commands, without altering your transformation logic. This will speed up your import drastically while keeping memory stable and data integrity intact. I'd love to chat about your project! The worst that can happen is you walk away with a free consultation. Regards, Danie.
₹12,500 INR in 7 days
0.0

Hello, I understand the issue: ingesting ~1M CSV/Excel rows into PostgreSQL is slow, likely due to row-by-row inserts, suboptimal batching, or memory-heavy processing. I can profile the current Node.js ingestion flow, confirm whether data is streamed or buffered, and optimize it using PostgreSQL COPY, controlled chunk streaming, and proper connection pooling—without touching your existing transformation logic. The goal will be significantly faster load time with stable memory usage and identical output. I’m happy to review the current implementation and suggest the fastest safe approach.
₹18,000 INR in 7 days
0.0
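The row-by-row inserts this bid suspects are the classic culprit: each INSERT pays a full round-trip. When COPY is not an option, a single parameterized multi-row INSERT per batch recovers most of the cost. A hypothetical helper (table and column names here are illustrative, not from the post) that builds such a statement for node-postgres:

```javascript
// Build one parameterized INSERT covering `rowCount` rows, so a whole batch
// costs a single round-trip instead of one per row.
// For 2 rows of (a, b) this yields:
//   INSERT INTO t (a, b) VALUES ($1, $2), ($3, $4)
function multiRowInsert(table, columns, rowCount) {
  const groups = [];
  for (let r = 0; r < rowCount; r++) {
    // Placeholders are numbered consecutively across all rows.
    const params = columns.map((_, c) => `$${r * columns.length + c + 1}`);
    groups.push(`(${params.join(', ')})`);
  }
  return `INSERT INTO ${table} (${columns.join(', ')}) VALUES ${groups.join(', ')}`;
}
```

Usage would be `client.query(multiRowInsert('t', ['a', 'b'], batch.length), batch.flat())`; note that Postgres caps a statement at 65535 bind parameters, so batch size times column count must stay under that.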

I can dramatically reduce your ingestion time for 1 million rows by implementing PostgreSQL Binary COPY and Node.js Streams. Based on your description, the current slowdown is likely due to the overhead of individual INSERT statements or unmanaged memory during the transformation phase. I would like to do this task for the amount of 25000 INR.
₹25,000 INR in 7 days
0.0

I am interested in this part-time opportunity. I have good experience in data entry and online work, with strong typing speed and high accuracy. I am reliable, detail-oriented, and can complete work on time. Ready to start immediately.
₹18,000 INR in 7 days
0.0

I’ve worked on Node.js services that ingest large CSV/Excel datasets into PostgreSQL (1M+ rows). From your description, the performance issue is very likely in the ingestion path rather than the transformation logic itself.

The first thing I’ll verify is how data is currently being processed:
- Whether rows are inserted one-by-one, partially batched, or fully buffered
- How transactions and connection pooling are handled
- Whether the file is streamed or loaded entirely into memory

Even with batching, repeated INSERTs or ORM-based writes can become a major bottleneck at this scale.

What I’ll do:
- Review and profile the existing ingestion flow to identify the real bottleneck (I/O, transforms, DB writes, or memory usage)
- Keep your transformation logic exactly as it is
- Optimize the import stage using PostgreSQL-native approaches such as COPY FROM STDIN
- Stream CSV/Excel data in controlled chunks to keep memory stable
- Tune batch sizes, transactions, and pooling for higher throughput
- Ensure data integrity and correctness after import

Done criteria:
- The same file loads in a fraction of the current time
- Memory usage remains stable during the run
- No behavior changes elsewhere in the application

At this scale, a proper streaming + COPY-based pipeline is usually orders of magnitude faster than generic inserts. The goal is a clean, maintainable solution that PostgreSQL is designed to handle. Happy to review the current implementation and profile a real run.
₹21,000 INR in 10 days
0.0
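The COPY FROM STDIN path this bid names needs each row serialized into COPY's wire text format before it is written to the stream: tab-delimited columns, `\N` for NULL, and backslash-escaped control characters. A small sketch of that encoding (the function name is mine, not from any bid):

```javascript
// Encode one row of values into COPY's default text format:
// tab-delimited, NULL as \N, with backslash/tab/newline/CR escaped.
function toCopyLine(values) {
  return values
    .map((v) => {
      if (v === null || v === undefined) return '\\N'; // COPY's NULL marker
      return String(v)
        .replace(/\\/g, '\\\\')  // escape backslashes first
        .replace(/\t/g, '\\t')   // tabs would split the column
        .replace(/\n/g, '\\n')   // newlines would split the row
        .replace(/\r/g, '\\r');
    })
    .join('\t') + '\n';
}
```

Each transformed row would then be written to the stream returned by `COPY target FROM STDIN` (e.g. via pg-copy-streams); using `FORMAT csv` instead shifts the same burden to CSV quoting rules.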

Hey Sami S., good morning! I’ve carefully checked your requirements and am really interested in this job. I’m a full-stack Node.js developer working on large-scale apps as a lead developer with U.S. and European teams. I offer the best quality and highest performance at the lowest price. I can complete your project on time, and you will experience great satisfaction working with me. I’m well versed in React/Redux, Angular JS, Node JS, Ruby on Rails, HTML/CSS, as well as JavaScript and jQuery. I have rich experience in PostgreSQL, Performance Tuning, Hadoop, Excel, Data Processing, Node.js, NoSQL Couch & Mongo, and Elasticsearch. For more information about me, please refer to my portfolios. I’m ready to discuss your project and start immediately. Looking forward to hearing back from you and discussing all the details. Yours sincerely
₹12,500 INR in 5 days
0.0

I can streamline your existing import pipeline to load ~1M rows dramatically faster without touching your current transformation logic or impacting the rest of the codebase.

Approach:

Profiling & Analysis
- Review current ingestion flow to confirm batch vs full-load behavior
- Profile CPU, memory, I/O, and DB latency to identify bottlenecks
- Validate transaction boundaries and pool usage

Optimized Ingestion
- Replace row-by-row inserts with PostgreSQL COPY (FROM STDIN) where feasible
- Implement streaming ingestion (CSV/Excel → transform → COPY) to keep memory stable
- If COPY isn’t viable for all cases, use controlled batch inserts with prepared statements
- Tune connection pooling, transaction size, and commit strategy
- Optional: temporary index/constraint handling during load (safe + reversible)

Integrity & Compatibility
- Preserve all existing transformations as-is
- Maintain data integrity (constraints, error handling, rollback strategy)
- No breaking changes to application behavior

Deliverables
- Refactored ingestion module (drop-in replacement)
- Performance benchmarks (before/after)
- Memory usage report
- Configuration notes (pool size, batch size)

Outcome
- Load time reduced to a fraction of current runtime
- Stable memory via streaming
- Reliable, repeatable imports at scale

Timeline: 2–4 days depending on current implementation complexity. Ready to start immediately.
₹25,000 INR in 3 days
0.0
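The "controlled batch inserts" fallback mentioned above comes down to grouping a row stream into fixed-size batches and awaiting each flush before buffering more, which is what keeps memory flat. A sketch as an async generator (the batch size of 5000 is a tuning assumption, not a figure from the post):

```javascript
// Group an (async or sync) iterable of rows into fixed-size batches.
// The caller awaits the DB write for each yielded batch before the next
// one is assembled, so backpressure falls out naturally.
async function* inBatches(rows, size = 5000) {
  let batch = [];
  for await (const row of rows) {
    batch.push(row);
    if (batch.length >= size) {
      yield batch;   // caller flushes this batch (INSERT/COPY) here
      batch = [];
    }
  }
  if (batch.length > 0) yield batch; // flush the final partial batch
}
```

In use: `for await (const batch of inBatches(csvRowStream)) { await flushToPostgres(batch); }`, where `csvRowStream` and `flushToPostgres` stand in for the project's own parser and write path.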

Hello, I have done a similar optimisation in the past, ingesting data from a CSV file into Postgres using a Node.js script. I can take a shot at it. I would need to know how many tables are involved, whether there are any constraints on the tables, etc. Hope to hear from you. Regards, Abhishek Kr. Singh
₹13,000 INR in 7 days
0.0

Hello, I have 5+ years of experience working with PostgreSQL performance tuning and large data ingestion pipelines. I understand that your Node.js service currently loads ~1M rows from CSV/Excel but is slow despite transformations already being implemented. I can analyse the existing ingestion flow, identify whether batching or full in-memory processing is causing the bottleneck, and optimize the import using PostgreSQL best practices such as COPY, controlled chunk streaming, and connection pooling—without altering your transformation logic or application behavior. The goal will be faster load time, stable memory usage, and identical functional output. I’m available to start immediately. Regards, Anjali
₹18,000 INR in 7 days
0.0

Navi Mumbai, India
Payment method verified
Member since Jan 29, 2018