Running pd.read_csv

Hi
I’m using JupyterLab. I did some quick programming courses in Python, and based on what I learned, I wrote the code below:
import pandas as pd
import numpy as np
data =pd.read_csv(C://Users/hp/ChartGPT_n_Research/New_Research_Contagion/dmrtn.csv)
I am getting the following error:
Cell In[3], line 3
data =pd.read_csv(C://Users/hp/ChartGPT_n_Research/New_Research_Contagion/dmrtn.csv)
^
SyntaxError: invalid syntax

I am unable to figure out the syntax error and fix the code; I need some help with this.
Let me know if anyone can assist.

Wrap the path in quotation marks.

"C://andsoonandsoforth"
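A minimal sketch of the fix (the long path is copied from your post and may not exist as-is; the tiny in-memory CSV is only there to show the `read_csv` call itself succeeding):

```python
import pandas as pd
from io import StringIO

# Quote the path so Python sees a string, not bare syntax:
path = "C:/Users/hp/ChartGPT_n_Research/New_Research_Contagion/dmrtn.csv"
print(type(path))  # <class 'str'>

# read_csv works the same on any file-like source; a tiny
# in-memory CSV demonstrates the call working end to end:
demo = pd.read_csv(StringIO("a,b\n1,2\n3,4\n"))
print(demo.shape)  # (2, 2)
```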

This has nothing to do with JupyterLab, though :slight_smile:


To expand on this: keep in mind that Jupyter can run code for a lot of languages, depending on the kernel used. You can always tell whether your post belongs here by running the code as normal Python in the Python interpreter/console or as a traditional Python script from the command line. If you get the same result in one of those places, then the problem isn’t pertinent to the Jupyter Discourse Forum. Once you get a little experience, you can do that as a thought experiment to better sort out where your problem lies.

Hi spookster and fomightez,
Thanks for your replies. I have downloaded Anaconda, installed JupyterLab on my laptop, and I open JupyterLab through the Anaconda interface at http://localhost:8888/lab/tree/Untitled.ipynb.

Though I did some Python courses on Codefinity, we were not taught how to use our own CSV files, only preloaded demo files.

So when I run the code below, with the path wrapped in quotation marks:
import pandas as pd
import numpy as np
data = pd.read_csv("C://desktop/ChartGPT_n_Research/New_Research_Contagion/dmrtn.csv")
I get a very long error message, as below:
FileNotFoundError                         Traceback (most recent call last)
Cell In[2], line 3
      1 import pandas as pd
      2 import numpy as np
----> 3 data = pd.read_csv("C://desktop/ChartGPT_n_Research/New_Research_Contagion/dmrtn.csv")

File ~\anaconda3\Lib\site-packages\pandas\io\parsers\readers.py:912, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, date_format, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options, dtype_backend)
    899 kwds_defaults = _refine_defaults_read(
    900     dialect,
    901     delimiter,
    (...)
    908     dtype_backend=dtype_backend,
    909 )
    910 kwds.update(kwds_defaults)
--> 912 return _read(filepath_or_buffer, kwds)

File ~\anaconda3\Lib\site-packages\pandas\io\parsers\readers.py:577, in _read(filepath_or_buffer, kwds)
    574 _validate_names(kwds.get("names", None))
    576 # Create the parser.
--> 577 parser = TextFileReader(filepath_or_buffer, **kwds)
    579 if chunksize or iterator:
    580     return parser

File ~\anaconda3\Lib\site-packages\pandas\io\parsers\readers.py:1407, in TextFileReader.__init__(self, f, engine, **kwds)
   1404     self.options["has_index_names"] = kwds["has_index_names"]
   1406 self.handles: IOHandles | None = None
-> 1407 self._engine = self._make_engine(f, self.engine)

File ~\anaconda3\Lib\site-packages\pandas\io\parsers\readers.py:1661, in TextFileReader._make_engine(self, f, engine)
   1659 if "b" not in mode:
   1660     mode += "b"
-> 1661 self.handles = get_handle(
   1662     f,
   1663     mode,
   1664     encoding=self.options.get("encoding", None),
   1665     compression=self.options.get("compression", None),
   1666     memory_map=self.options.get("memory_map", False),
   1667     is_text=is_text,
   1668     errors=self.options.get("encoding_errors", "strict"),
   1669     storage_options=self.options.get("storage_options", None),
   1670 )
   1671 assert self.handles is not None
   1672 f = self.handles.handle

File ~\anaconda3\Lib\site-packages\pandas\io\common.py:716, in get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text, errors, storage_options)
    713 codecs.lookup_error(errors)
    715 # open URLs
--> 716 ioargs = _get_filepath_or_buffer(
    717     path_or_buf,
    718     encoding=encoding,
    719     compression=compression,
    720     mode=mode,
    721     storage_options=storage_options,
    722 )
    724 handle = ioargs.filepath_or_buffer
    725 handles: list[BaseBuffer]

File ~\anaconda3\Lib\site-packages\pandas\io\common.py:416, in _get_filepath_or_buffer(filepath_or_buffer, encoding, compression, mode, storage_options)
    411 pass
    413 try:
    414     file_obj = fsspec.open(
    415         filepath_or_buffer, mode=fsspec_mode, **(storage_options or {})
--> 416     ).open()
    417 # GH 34626 Reads from Public Buckets without Credentials needs anon=True
    418 except tuple(err_types_to_retry_with_anon):

File ~\anaconda3\Lib\site-packages\fsspec\core.py:134, in OpenFile.open(self)
    127 def open(self):
    128     """Materialise this as a real open file without context
    129
    130     The OpenFile object should be explicitly closed to avoid enclosed file
    131     instances persisting. You must, therefore, keep a reference to the OpenFile
    132     during the life of the file-like it generates.
    133     """
--> 134     return self.__enter__()

File ~\anaconda3\Lib\site-packages\fsspec\core.py:102, in OpenFile.__enter__(self)
     99 def __enter__(self):
    100     mode = self.mode.replace("t", "").replace("b", "") + "b"
--> 102 f = self.fs.open(self.path, mode=mode)
    104 self.fobjects = [f]
    106 if self.compression is not None:

File ~\anaconda3\Lib\site-packages\fsspec\spec.py:1154, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)
   1152 else:
   1153     ac = kwargs.pop("autocommit", not self._intrans)
-> 1154 f = self._open(
   1155     path,
   1156     mode=mode,
   1157     block_size=block_size,
   1158     autocommit=ac,
   1159     cache_options=cache_options,
   1160     **kwargs,
   1161 )
   1162 if compression is not None:
   1163     from fsspec.compression import compr

File ~\anaconda3\Lib\site-packages\fsspec\implementations\local.py:183, in LocalFileSystem._open(self, path, mode, block_size, **kwargs)
    181 if self.auto_mkdir and "w" in mode:
    182     self.makedirs(self._parent(path), exist_ok=True)
--> 183 return LocalFileOpener(path, mode, fs=self, **kwargs)

File ~\anaconda3\Lib\site-packages\fsspec\implementations\local.py:287, in LocalFileOpener.__init__(self, path, mode, autocommit, fs, compression, **kwargs)
    285 self.compression = get_compression(path, compression)
    286 self.blocksize = io.DEFAULT_BUFFER_SIZE
--> 287 self._open()

File ~\anaconda3\Lib\site-packages\fsspec\implementations\local.py:292, in LocalFileOpener._open(self)
    290 if self.f is None or self.f.closed:
    291     if self.autocommit or "w" not in self.mode:
--> 292         self.f = open(self.path, mode=self.mode)
    293 if self.compression:
    294     compress = compr[self.compression]

FileNotFoundError: [Errno 2] No such file or directory: 'C:/desktop/ChartGPT_n_Research/New_Research_Contagion/dmrtn.csv'

I still don’t have a clue what error I am making, as I am unable to decipher the message. Is there a way to upload CSV files and pull them into my working notebook (*.ipynb)?
Kindly let me know, anyone.
Best
Subhransu

First, make sure it isn’t a simple typo. Are you copying the file path and file name from the command line or the file’s properties and pasting them in, or are you typing them yourself? You should always endeavor to type as little as possible. (Later in this post I mention using %cd, pwd, and ls in your .ipynb cells; that is another route to making sure you get correct paths and file names, so keep it in mind as a resource for this, too.)
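One quick way to check for a typo is to ask Python whether the path exists at all. This is only a sketch; the path below is the one from the original post and may well not exist on your machine:

```python
from pathlib import Path

# If this prints False, the path or filename is wrong, or the file
# actually lives somewhere else (e.g. under a OneDrive folder).
p = Path("C:/Users/hp/ChartGPT_n_Research/New_Research_Contagion/dmrtn.csv")
print(p.exists())
```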


If you are using Anaconda on your own computer to launch Jupyter running on your own machine, the easiest way to open the file dmrtn.csv is to put it in the exact same directory as your running .ipynb file.

Then make your read command the following:

data = pd.read_csv("dmrtn.csv")

To try another route of troubleshooting, do that with a different tiny CSV you get off the internet that you know is good! (Here is an example one.) Once that works, make a new, simple CSV file yourself, move it elsewhere in your file hierarchy, and see if accessing it still works. Use both absolute and relative paths to read the simple CSV file in various places on your system. (If you don’t know what that means, go read about it in some Python literature.) Then finally try the file that prompted this post. It may indeed be that your CSV file is the issue in addition to the file path.
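That first step might look like the following sketch (the filename tiny_test.csv is just an invented example; run it in a cell of the notebook so the file lands next to the .ipynb):

```python
import pandas as pd

# Write a tiny known-good CSV next to the notebook, then read it back.
with open("tiny_test.csv", "w") as f:
    f.write("x,y\n1,2\n3,4\n")

data = pd.read_csv("tiny_test.csv")
print(data.shape)  # (2, 2)
```

If this works but your own file does not, the problem is the path or the file itself, not pandas.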

After you have at least the simple CSV files working, you can also move them elsewhere in your file hierarchy on your system and then try to read the CSV data into pandas from there. This will be trickier!! You may need some tips from the next section to help you with this.

What is going on though?

Your unformatted traceback (see below) makes it hard to be sure; however, I think the key line is this:

FileNotFoundError: [Errno 2] No such file or directory: 'C:/desktop/ChartGPT_n_Research/New_Research_Contagion/dmrtn.csv'

Bear in mind that Windows paths are notoriously tricky. This is why you have probably heard people say that if you want to do data science work, you may have an easier time with a Unix-based system like macOS or Linux, or with WSL on your Windows machine. In support of Windows paths being tricky, see python - How do you load a csv file format to jupyter notebook? - Stack Overflow, python - whitespaces in the path of windows filepath - Stack Overflow, and How should I write a Windows path in a Python string literal? - Stack Overflow. I particularly like furas’ succinct comment there about the r prefix or escaping the slashes.
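To illustrate the r prefix and slash-escaping options, a minimal sketch (the exact path here is hypothetical):

```python
# Three ways to write the same Windows path as a Python string:
p1 = r"C:\Users\hp\Desktop\dmrtn.csv"     # raw string: backslashes left alone
p2 = "C:\\Users\\hp\\Desktop\\dmrtn.csv"  # backslashes escaped by hand
p3 = "C:/Users/hp/Desktop/dmrtn.csv"      # forward slashes also work on Windows

print(p1 == p2)  # True: both spell the identical string
```

An unadorned `"C:\Users\..."` is the trap: `\U` starts a Unicode escape and raises a SyntaxError, which is why the r prefix or forward slashes are recommended.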

Work through those other Windows users’ issues, keep debugging your own, and you may figure out what was going on that prompted you to post here.

Finally, depending on how your system is set up, there may be places Jupyter cannot access; you should investigate this so you know. Use %cd to try to change the working directory from inside your running .ipynb, in combination with ls in the next cell to check the contents, and pwd in a cell to be sure the working directory is what you think it is. Start small by first changing to the directory above your initial working directory with %cd .. and then go further. (Hopefully %cd .. works on Windows.) Then combine this with the other troubleshooting approaches I’ve mentioned to investigate what Jupyter can and cannot see, and how you should point Python at things in different locations on your system. OneDrive on Windows is indeed notorious for putting files in different locations than users think. See Jupyter-notebook does not show my desktop folders - Stack Overflow and also this issue entitled ‘Jupyter does not recognize files on my Desktop and will not create files to Desktop’. I see your file path does involve Desktop. Are you a OneDrive user?
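In plain Python (so it also works outside a notebook), that navigation routine might look like the sketch below; the %cd, pwd, and ls steps map roughly onto os calls:

```python
import os

os.chdir("..")                  # like %cd .. : go up one directory
print(os.getcwd())              # like pwd    : where am I now?
print(sorted(os.listdir(".")))  # like ls     : what is in this directory?
```

If the CSV file does not show up in that listing, pandas will not find it with a bare relative path from here.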


Please learn about posting code text in a way that retains the formatting in forums such as this.
Specifically, see about ‘block code formatting’ here and about ‘fenced code blocks’ here. They are the same thing if you look into the details; the two locations just use slightly different terms.


I don’t know what point you are at in learning, but it sort of goes without saying that coursework is a jumping-off point for you to go on and learn more. There isn’t possibly enough time to cover everything in a course.

Plus, that topic doesn’t really make this a suitable place for this discussion. By covering the use of %cd in the running .ipynb file, and how Jupyter sometimes doesn’t have access to everywhere on your system, I may have steered this towards being pertinent here.

Dear fomightez, thank you very much for your detailed input; it is very useful. I finally managed to run pd.read_csv with data = pd.read_csv('Desktop/DMRTN.csv') and print(data). I am very grateful. As I work on Python and data, I know I will run into some issue or other, and will possibly look to experts like you for further assistance.
