Learning Data Analysis with Python: Pandas DataFrame

⚠️ Note: This post is part of the Learning Data Analysis with Python series. If you haven't read the first post, some of the content won't make sense. Check it out here.

In the previous article, we talked about Pandas Series, working with real-world data, and handling missing values in data. Although Series are very useful, most real-world datasets contain multiple rows and columns, which is why DataFrames are used much more than Series. In this post, we'll talk about the DataFrame and some operations that we can perform on DataFrame objects.

What is a DataFrame?

As we saw in the previous post, a Series is a container of scalars; a DataFrame is a container of Series. It is a dictionary-like data structure for Series. A DataFrame is similar to two-dimensional heterogeneous tabular data (a SQL table). A DataFrame can be created from many different types of data, such as a dictionary of Series, a dictionary of ndarrays/lists, a list of dictionaries, etc. We'll look at some of these methods to create a DataFrame object, and then we'll see some operations that we can apply to a DataFrame object to manipulate the data.

DataFrame using a dictionary of Series

In[1]:
import pandas as pd

d = {
    'col1': pd.Series([1, 2, 3], index=["row1", "row2", "row3"]),
    'col2': pd.Series([4, 5, 6], index=["row1", "row2", "row3"])
}
df = pd.DataFrame(d)
df

Out[1]:
      col1  col2
row1     1     4
row2     2     5
row3     3     6

As shown in the above code, the keys of the dict of Series become the column names of the DataFrame, and the index of each Series becomes the row labels. All the data gets mapped by row label, i.e., the order of the index in each Series doesn't matter.

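To see this label alignment in action, here is a small sketch where the second Series lists the same labels in a different order; the resulting DataFrame is identical to the one above:

d = {
    'col1': pd.Series([1, 2, 3], index=["row1", "row2", "row3"]),
    # Same labels in a different order: values still line up by label
    'col2': pd.Series([6, 4, 5], index=["row3", "row1", "row2"])
}
pd.DataFrame(d)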

DataFrame using ndarrays/lists

In[2]:
d = {
    'one': [1., 2., 3.],
    'two': [4., 5., 6.]
}
df = pd.DataFrame(d)
df

Out[2]:
   one  two
0  1.0  4.0
1  2.0  5.0
2  3.0  6.0

As shown in the above code, when we use ndarrays/lists and don't pass an index, range(n) becomes the index of the DataFrame. When using ndarrays to create a DataFrame, the arrays must all be of the same length, and if we pass an explicit index, that index must also be of the same length as the arrays.

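For example, here is a sketch that passes an explicit index of matching length; passing two or four labels instead would raise a ValueError:

d = {
    'one': [1., 2., 3.],
    'two': [4., 5., 6.]
}
# Three row labels for three-element lists
pd.DataFrame(d, index=["a", "b", "c"])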

DataFrame using a list of dictionaries

In[3]:
d = [
    {'one': 1, 'two': 2, 'three': 3},
    {'one': 10, 'two': 20, 'three': 30, 'four': 40}
]
df = pd.DataFrame(d)
df

Out[3]:
   one  two  three  four
0    1    2      3   NaN
1   10   20     30  40.0

In[4]:
df = pd.DataFrame(d, index=["first", "second"])
df

Out[4]:
        one  two  three  four
first     1    2      3   NaN
second   10   20     30  40.0

And finally, as described above, we can create a DataFrame object using a list of dictionaries, and we can provide an explicit index with this method, too.

Although learning to create a DataFrame object using these methods is necessary, in the real world we won't create DataFrames this way; instead, we'll load data from external data files and manipulate that data. So, let's take a look at how to load a CSV file and create a DataFrame. In the previous post, we worked with the Nifty50 data to demonstrate how Series works; similarly, in this post we'll load the Nifty50 2018 data, but this dataset contains the Open, Close, High and Low values of Nifty50. First, let's see what this dataset looks like, and then we'll load it into a DataFrame.

[Image: Nifty 50 Data (2018)]

In[5]:
df = pd.read_csv('NIFTY50_2018.csv')
df

Out[5]:
            Date      Open      High       Low     Close
0    31 Dec 2018  10913.20  10923.55  10853.20  10862.55
1    28 Dec 2018  10820.95  10893.60  10817.15  10859.90
2    27 Dec 2018  10817.90  10834.20  10764.45  10779.80
3    26 Dec 2018  10635.45  10747.50  10534.55  10729.85
4    24 Dec 2018  10780.90  10782.30  10649.25  10663.50
..           ...       ...       ...       ...       ...
241  05 Jan 2018  10534.25  10566.10  10520.10  10558.85
242  04 Jan 2018  10469.40  10513.00  10441.45  10504.80
243  03 Jan 2018  10482.65  10503.60  10429.55  10443.20
244  02 Jan 2018  10477.55  10495.20  10404.65  10442.20
245  01 Jan 2018  10531.70  10537.85  10423.10  10435.55

In[6]:
df = pd.read_csv('NIFTY50_2018.csv', index_col=0)
df

Out[6]:
                 Open      High       Low     Close
Date
31 Dec 2018  10913.20  10923.55  10853.20  10862.55
28 Dec 2018  10820.95  10893.60  10817.15  10859.90
27 Dec 2018  10817.90  10834.20  10764.45  10779.80
26 Dec 2018  10635.45  10747.50  10534.55  10729.85
24 Dec 2018  10780.90  10782.30  10649.25  10663.50
...               ...       ...       ...       ...
05 Jan 2018  10534.25  10566.10  10520.10  10558.85
04 Jan 2018  10469.40  10513.00  10441.45  10504.80
03 Jan 2018  10482.65  10503.60  10429.55  10443.20
02 Jan 2018  10477.55  10495.20  10404.65  10442.20
01 Jan 2018  10531.70  10537.85  10423.10  10435.55

As shown above, we have loaded the dataset into a DataFrame called df. Looking at the data, the Date column is a natural index for the DataFrame, and in the second cell we set it as the index by passing the index_col parameter to the read_csv method.

The read_csv method accepts many more parameters, such as usecols, which tells pandas to load only the specified columns, and na_values, which supplies additional strings that pandas should treat as null values. Read more about all the parameters in the pandas documentation.

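For instance, here is a minimal sketch combining both parameters on our file (the extra na_values markers are hypothetical; adjust them to whatever your data actually uses):

# Load only the Date and Close columns, with Date as the index,
# and treat 'NA' and '-' as missing values in addition to the defaults
df_close = pd.read_csv('NIFTY50_2018.csv',
                       index_col='Date',
                       usecols=['Date', 'Close'],
                       na_values=['NA', '-'])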

Now, let's look at some of the basic operations that we can perform on the DataFrame object to learn more about our data.

In[7]:
# Shape (number of rows and columns) of the DataFrame
df.shape

Out[7]:
(246, 4)

In[8]:
# Index of the DataFrame
df.index

Out[8]:
Index(['31 Dec 2018', '28 Dec 2018', '27 Dec 2018', '26 Dec 2018',
       '24 Dec 2018', '21 Dec 2018', '20 Dec 2018', '19 Dec 2018',
       '18 Dec 2018', '17 Dec 2018',
       ...
       '12 Jan 2018', '11 Jan 2018', '10 Jan 2018', '09 Jan 2018',
       '08 Jan 2018', '05 Jan 2018', '04 Jan 2018', '03 Jan 2018',
       '02 Jan 2018', '01 Jan 2018'],
      dtype='object', name='Date', length=246)

In[9]:
# List of columns
df.columns

Out[9]:
Index(['Open', 'High', 'Low', 'Close'], dtype='object')

In[10]:
# Check if the DataFrame is empty or not
df.empty

Out[10]:
False

It's crucial to know the data types of all the columns because sometimes, due to corrupt or missing data, pandas may identify numeric data as the 'object' data type. That isn't desirable, since numeric operations on 'object' columns are costlier in time than on the numeric data types float64 and int64.

In[11]:
# Datatypes of all the columns
df.dtypes

Out[11]:
Open     float64
High     float64
Low      float64
Close    float64
dtype: object
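
If a numeric column does get read in as 'object', one way to repair it is pd.to_numeric. A sketch, assuming the 'Open' column had been mis-read:

# Convert to a numeric dtype; entries that can't be parsed become NaN
df['Open'] = pd.to_numeric(df['Open'], errors='coerce')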

We can use iloc and loc to index into the DataFrame and retrieve particular data from it.

In[12]:
# Indexing using implicit index
df.iloc[0]

Out[12]:
Open     10913.20
High     10923.55
Low      10853.20
Close    10862.55
Name: 31 Dec 2018, dtype: float64

In[13]:
# Indexing using explicit index
df.loc["01 Jan 2018"]

Out[13]:
Open     10531.70
High     10537.85
Low      10423.10
Close    10435.55
Name: 01 Jan 2018, dtype: float64

We can also index using both row and column to get a specific cell from our DataFrame.

In[14]:
# Indexing using both the axes (rows and columns)
df.loc["01 Jan 2018", "High"]

Out[14]:
10537.85
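
Both indexers also accept lists and slices, so we can pull out several rows and columns at once. A small sketch, using labels from our dataset:

# Two rows and two columns, selected by label
df.loc[["01 Jan 2018", "02 Jan 2018"], ["Open", "Close"]]

# First three rows, first and fourth columns, selected by position
df.iloc[0:3, [0, 3]]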

We can also perform all the math operations on a DataFrame object, just as we did on Series.

In[15]:
# Basic math operations
df.add(10)

Out[15]:
                 Open      High       Low     Close
Date
31 Dec 2018  10923.20  10933.55  10863.20  10872.55
28 Dec 2018  10830.95  10903.60  10827.15  10869.90
27 Dec 2018  10827.90  10844.20  10774.45  10789.80
26 Dec 2018  10645.45  10757.50  10544.55  10739.85
24 Dec 2018  10790.90  10792.30  10659.25  10673.50
...               ...       ...       ...       ...
05 Jan 2018  10544.25  10576.10  10530.10  10568.85
04 Jan 2018  10479.40  10523.00  10451.45  10514.80
03 Jan 2018  10492.65  10513.60  10439.55  10453.20
02 Jan 2018  10487.55  10505.20  10414.65  10452.20
01 Jan 2018  10541.70  10547.85  10433.10  10445.55
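
Arithmetic also works element-wise between columns. For example, a sketch that derives each day's trading range from our columns:

# High minus Low, element-wise, gives each day's trading range as a Series
daily_range = df['High'] - df['Low']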

We can also aggregate the data using the agg method. For instance, we can get the mean and median values of all the columns in our data, as shown below.

In[16]:
# Aggregate one or more operations
df.agg(["mean", "median"])

Out[16]:
                Open          High           Low         Close
mean    10758.260366  10801.753252  10695.351423  10749.392276
median  10704.100000  10749.850000  10638.100000  10693.000000
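
agg can also take a mapping from column names to operations when different columns need different aggregates. A sketch with our columns:

# Different aggregates for different columns
df.agg({'Open': ['mean'], 'Close': ['min', 'max']})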

However, pandas provides a more convenient method that gives us much more than a couple of summary statistics across all the columns in our data. That method is describe. As the name suggests, it describes our DataFrame by applying mathematical and statistical operations across all the columns.

In[17]:
df.describe()

Out[17]:
               Open          High           Low         Close
count    246.000000    246.000000    246.000000    246.000000
mean   10758.260366  10801.753252  10695.351423  10749.392276
std      388.216617    379.159873    387.680138    382.632569
min     9968.800000  10027.700000   9951.900000   9998.050000
25%    10515.125000  10558.650000  10442.687500  10498.912500
50%    10704.100000  10749.850000  10638.100000  10693.000000
75%    10943.100000  10988.075000  10878.262500  10950.850000
max    11751.800000  11760.200000  11710.500000  11738.500000

And to get the name, data type and number of non-null values in each column, pandas provides the info method.

In[18]:
df.info()

Out[18]:
<class 'pandas.core.frame.DataFrame'>
Index: 246 entries, 31 Dec 2018 to 01 Jan 2018
Data columns (total 4 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   Open    246 non-null    float64
 1   High    246 non-null    float64
 2   Low     246 non-null    float64
 3   Close   246 non-null    float64
dtypes: float64(4)
memory usage: 19.6+ KB

We are working with a small dataset of fewer than 300 rows, so we can inspect all the rows. But when our data has tens or hundreds of thousands of rows, working with all of it directly becomes very difficult. In statistics, 'sampling' is a technique that solves this problem: it means choosing a small amount of data from the whole dataset such that the sample has roughly the same diversity of features as the whole dataset. Picking such representative rows manually is nearly impossible, but as always, pandas comes to our rescue with the sample method.

In[19]:
# Data sampling: get n random rows from the data
df.sample(5)

Out[19]:
                 Open      High       Low     Close
Date
04 Jul 2018  10715.00  10777.15  10677.75  10769.90
22 Jun 2018  10742.70  10837.00  10710.45  10821.85
14 Mar 2018  10393.05  10420.35  10336.30  10410.90
09 Jan 2018  10645.10  10659.15  10603.60  10637.00
27 Apr 2018  10651.65  10719.80  10647.55  10692.30

However, executing this method produces different results every time, which may be unacceptable in some cases. That can be solved by passing the random_state parameter to the sample method, which reproduces the same result on every run.

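A minimal sketch (the seed value 42 is an arbitrary choice):

# A fixed seed makes the 'random' sample reproducible across runs
df.sample(5, random_state=42)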

As shown above, we can perform many operations on a DataFrame object to get information about, and out of, the DataFrame. These are just the basic operations; there are many more interesting methods we can apply to a DataFrame object, such as pivot, merge, join and many more. Also, in this dataset we have time as the index of our DataFrame, i.e. this is a time-series dataset, and pandas provides many methods for manipulating time-series data as well, such as rolling-window calculations.

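As a small taste, here is a sketch of a rolling-window computation using pandas' rolling API; we parse and sort the date index first, since our file is in reverse chronological order:

# Parse the string dates, sort chronologically, then take a
# 5-row moving average of the closing price
ts = df.copy()
ts.index = pd.to_datetime(ts.index)
ts.sort_index()['Close'].rolling(window=5).mean()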

That will be all for this post. In the next post, we'll look at some of these methods and perform five analysis tasks using them. Till then, you can take a look at the pandas documentation and find more information about DataFrame objects and the methods that can be applied to them.

Originally published at: https://www.bytetales.co/pandas-data-frames-learning-data-analysis-with-python/

Thank you for reading!

Translated from: https://medium.com/byte-tales/learning-data-analysis-with-python-pandas-dataframe-2f2d40d6c11f
