Complex features can be derived from the existing ones however you wish; any combination that makes sense for the problem can be created.
There are still some basic features that need preprocessing before they can be used: categorical features (like 'Category' or 'Type') and ordinal features (like 'Price').
The ordinal features will be handled by keeping only the numeric part.
The categorical ones can be transformed in several ways, for example with dummy (one-hot) encoding or factorization (label) encoding.
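As a minimal sketch of both transformations, assuming a pandas DataFrame with columns named 'Category' and 'Price' (the sample values below are made up, not from the real dataset):

```python
import pandas as pd

# Hypothetical sample resembling the columns described in the text
df = pd.DataFrame({
    "Category": ["GAME", "TOOLS", "GAME"],
    "Price": ["$4.99", "$0", "$2.49"],
})

# Ordinal feature: keep only the number
df["Price"] = df["Price"].str.replace("$", "", regex=False).astype(float)

# Factorization (label) encoding would be:
#   df["Category"], _ = pd.factorize(df["Category"])
# Here we use one-hot (dummy) encoding instead:
df = pd.get_dummies(df, columns=["Category"])
print(df)
```

One-hot encoding avoids imposing an artificial order on the categories, at the cost of one extra column per category value.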
Our data has over 1,000 missing values in the "Ratings" column, which is around 10% of the data. Rather than let those rows go to waste, we can try to build a predictor for this column.
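The idea can be sketched as follows: train a regressor on the rows where the rating is known, then use it to fill in the missing ones. The data below is synthetic and the feature matrix is a stand-in for whatever encoded features the real dataset provides:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 3 numeric features and a rating driven by the first one
X = rng.random((100, 3))
rating = 1 + 4 * X[:, 0] + rng.normal(0, 0.1, 100)
rating[:10] = np.nan  # ~10% missing, as in the text

# Train only on rows where the rating is known ...
known = ~np.isnan(rating)
model = RandomForestRegressor(random_state=0).fit(X[known], rating[known])

# ... then fill the gaps with the model's predictions
rating[~known] = model.predict(X[~known])
print(np.isnan(rating).sum())
```

This keeps all rows usable for later modeling, at the cost of the imputed ratings inheriting whatever bias the predictor has.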
Removing data outliers can improve model accuracy.
Outliers are data points that do not conform to the true distribution of the data and can often be interpreted as noise. They are also usually few in number compared to the overall data.
Statistical methods such as z-score or plots can be used to detect outliers.
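A minimal z-score sketch on synthetic data, loosely matching Example 1 below (mostly cheap apps plus a few $400 scams; the threshold of 3 standard deviations is a common convention, not something fixed by the text):

```python
import numpy as np

# Synthetic prices: most apps cost $5, a few scams cost $400
prices = np.concatenate([np.full(97, 5.0), np.full(3, 400.0)])

# z-score: how many standard deviations each point lies from the mean
z = (prices - prices.mean()) / prices.std()

# Flag points more than 3 standard deviations away
outliers = np.abs(z) > 3
print(prices[outliers])
```

Plotting (e.g. a histogram or box plot) achieves the same goal visually and is often the first step before committing to a numeric threshold.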
Disclaimer
Examples below are just conceptual and not based on real-world data.
Example 1 : Outliers
Applications usually cost below $20, but there are 100 scam apps that cost $400 (out of 7,000 apps in total).
Example 2 : Not outliers
There are very few apps with a rating of 2 out of 5: only 200. These are not outliers, because such middling-low ratings are naturally rare; people rarely take the time to rate apps that are neither very bad nor very good.
Example 3 : Outliers
There are a few apps rated exactly 5 out of 5 : 200 apps.
If those 5-star ratings are a scam (like only ratings from friends and family :)) ), then these apps are outliers.
Example 4 : Not outliers, but may be
There are only around 200 apps with over 2 million installs. These are not scams, because you cannot realistically fake installs at this scale, so they belong to the true distribution.
But if we are interested in predicting whether an app is good ('good' meaning over 500,000 installs, for example), and the majority of apps have below 1 million installs, then those 200 apps with over 2 million installs can be considered outliers. They are not of interest to our problem, they are few, and they lie so far from the mean that they may hurt the model's learning and performance: the model will always try to accommodate these points in its generalization, while in this context we do not care whether an app is exceptionally good. We only care whether it is good.
Predicting exceptionally good apps from so few examples falls under anomaly detection, which is a different kind of machine learning problem.
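Concretely, this framing amounts to labeling the target and then dropping the far tail before training. The column name 'Installs' and the 2-million cutoff are taken from the example above; the rows themselves are made up:

```python
import pandas as pd

# Hypothetical installs data; 'good' means over 500,000 installs
df = pd.DataFrame({"Installs": [10_000, 600_000, 900_000, 2_500_000, 3_000_000]})
df["good"] = df["Installs"] > 500_000

# Drop the far-tail apps (over 2 million installs) before training,
# since they are outliers for this framing of the problem
train_df = df[df["Installs"] <= 2_000_000]
print(train_df)
```

Note that the dropped rows are still valid data; they are removed only because this particular prediction task does not need them.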