Valve’s multi-player games, as well as Steam (Valve’s successful trading platform), have allowed for the spontaneous emergence of complex virtual, yet quite real, economies. These economies are replete with rich trading patterns, fascinating ‘institutions’ (which have also sprung up organically), socio-economic conventions, and, generally, a host of economic phenomena that partly reflect what we observe in the analogue world and partly constitute new and unexplored behavioural patterns.
The task of a Valve economist is to make good use of the incredible wealth of data concerning these social economies, to pose fresh questions about their workings, and to generate methods for converting new knowledge about these economic vistas into tangible ideas that help improve our customers’ experiences.
Research, design, develop, and validate economic models to explain user behavior for all of Valve’s products.
Design experiments to test hypotheses about in-game economies (a minimal sketch of this kind of analysis follows this list).
Provide insight into short- and long-term behavioral patterns of participants in virtual economies.
Inform decision-making at Valve by providing quantitative and economic rationale for various lines of inquiry.
Create new avenues of analysis based on existing economic metrics, as well as generating new domains of data to collect and investigate.
Collaborate with our business development team to improve the performance of existing pricing strategies and incentives for our customers and partners.
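To make the "design experiments" item concrete, here is a minimal sketch of the sort of analysis it seems to imply: a two-arm pricing experiment evaluated with a Welch t-test. The data, the arm definitions, and all numbers below are made up purely for illustration.

```python
# A two-arm pricing experiment evaluated with a Welch t-test.
# Everything here is simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-user spend (in a hypothetical in-game currency) for the two arms.
control_spend = rng.gamma(shape=2.0, scale=5.0, size=5_000)    # users shown the baseline price
treatment_spend = rng.gamma(shape=2.0, scale=5.4, size=5_000)  # users shown a discounted price

# Welch's t-test: does the discount change average spend per user?
t_stat, p_value = stats.ttest_ind(treatment_spend, control_spend, equal_var=False)
lift = treatment_spend.mean() / control_spend.mean() - 1

print(f"mean spend (control):   {control_spend.mean():.2f}")
print(f"mean spend (treatment): {treatment_spend.mean():.2f}")
print(f"estimated lift: {lift:+.1%}  (t = {t_stat:.2f}, p = {p_value:.4f})")
```

Nothing fancy, but it is the kind of quantitative reasoning the listing seems to expect.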
Graduate degree in Economics or related field
Advanced knowledge of statistics
Four years’ experience with:
Econometrics, data mining, or a related field
Relevant analysis techniques that inform the creation of economic models
Proficiency in one or more of the following programming languages: C++, SQL, PHP, or equivalent
Or, as a statistician:
One thing we have a lot of at Valve is data. Lots and lots of data. And we love using all that information to make better decisions. To help us do that, we’re looking for an experienced statistician who can perform quantitative analyses on all aspects of Valve’s gameplay, financial, and company data. Intrigued? You’d get to use your extensive statistical knowledge, along with practiced data-mining skills, to derive insights from the immense volumes of data we collect. In addition, you’d improve Valve’s existing metrics collection and analysis techniques by formulating new lines of inquiry into untracked metrics and creating best practices for the analysis of all collected data.
Research, design, develop, and validate statistical models to explain past behavior and to predict future behavior across all Valve products (see the sketch after this list).
Uncover latent trends in user behavior by mining existing statistical databases.
Generate new lines of inquiry by creating novel metrics to incorporate into our existing tracking databases.
Inform decision-making at Valve by providing quantitative rationale for various decision alternatives.
Empirically evaluate financial projections and game design hypotheses.
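As an illustration of the "predict future behavior" item above, here is a minimal sketch, on synthetic data, of a simple predictive model: a logistic regression that scores whether a user will be active next week from a few hypothetical activity features. The feature names, the label construction, and all coefficients are invented.

```python
# A logistic regression, fit on synthetic data, that scores whether a user
# will be active next week. Features, labels, and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical per-user features: sessions last week, minutes played, items traded.
X = np.column_stack([
    rng.poisson(3, n),           # sessions_last_week
    rng.gamma(2.0, 30.0, n),     # minutes_played
    rng.poisson(1, n),           # items_traded
])

# Synthetic label: the more active a user was, the more likely they return.
logits = 0.4 * X[:, 0] + 0.01 * X[:, 1] + 0.3 * X[:, 2] - 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```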
Graduate degree in Statistics, Applied Mathematics, or an equivalent field
Extensive proficiency with one or more of the following pieces of data analysis software: SPSS, Systat, Matlab, R, or equivalent
Four years’ experience with:
Statistics/data modeling in an applied context
Relevant statistical techniques to inform the creation of predictive models
Proficiency in one or more of the following programming languages: C++, SQL, PHP (or equivalent)
Since 2005, the Quarterly Journal of Political Science has asked authors for the data and code needed to replicate their papers. The journal then performs a very basic review: it simply runs what the authors submitted – as is – and checks whether the results match those presented in the article. Has this simple process been worth it? According to Nicholas Eubank, yes:
Experience has shown the answer is an unambiguous “yes.” Of the 24 empirical papers subject to in-house replication review since September 2012, only 4 packages required no modifications. Of the remaining 20 papers, 13 had code that would not execute without errors, 8 failed to include code for results that appeared in the paper, and 7 failed to include installation directions for software dependencies. Most troubling, however, 13 (54 percent) had results in the paper that differed from those generated by the author’s own code. Some of these issues were relatively small — likely arising from rounding errors during transcription — but in other cases they involved incorrectly signed or mis-labeled regression coefficients, large errors in observation counts, and incorrect summary statistics. Frequently, these discrepancies required changes to full columns or tables of results. Moreover, Zachary Peskowitz, who served as the QJPS replication assistant from 2010 to 2012, reports similar levels of replication errors during his tenure as well. The extent of the issues — which occurred despite authors having been informed their packages would be subject to review — points to the necessity of this type of in-house interrogation of code prior to paper publication.
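It is worth noting how mechanical this kind of in-house check can be. Below is a rough sketch of what one automated step of such a review might look like: run the authors’ script exactly as submitted and compare the numbers it prints against those reported in the paper. The file name, the expected values, and the tolerance are all hypothetical.

```python
# Run the authors' script exactly as submitted, then compare the numbers it
# prints against the values reported in the paper. The file name, expected
# values, and tolerance below are all hypothetical.
import re
import subprocess

EXPECTED = {"coef_education": 0.042, "n_obs": 1842}  # numbers "from the paper"
TOLERANCE = 1e-3

# Execute the replication package as-is and capture whatever it prints.
result = subprocess.run(
    ["python", "replication/analysis.py"],
    capture_output=True, text=True, check=True,
)

# Assume the script prints lines such as "coef_education = 0.042".
reported = {
    m.group(1): float(m.group(2))
    for m in re.finditer(r"(\w+)\s*=\s*(-?\d+(?:\.\d+)?)", result.stdout)
}

for name, expected in EXPECTED.items():
    got = reported.get(name)
    ok = got is not None and abs(got - expected) <= TOLERANCE
    print(f"{name}: paper={expected} code={got} -> {'OK' if ok else 'MISMATCH'}")
```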
The question remains: how many Brazilian journals do this?
Are you – or your doctoral program – in tune with the demands placed on a modern economist/data scientist, such as an economist at Facebook?
Below is my free translation of the relevant excerpts from a job posting:
Facebook is looking for exceptional economists to join our Data Science team. Candidates should have a deep understanding of causal analysis – from designing and analyzing experiments to working with complex or unstructured data. Economists at Facebook create and carry out projects in areas such as online market design, forecasting, network analysis, auction design, consumer behavior, and behavioral economics.
Some required or desirable skills:
PhD in Economics or a relevant field;
Extensive experience solving analytical problems using quantitative approaches;
Comfortable manipulating and analyzing complex, high-volume, high-dimensional data from a variety of sources;
Expert knowledge of an analysis tool such as R, Matlab, or Stata;
Experience with online data: mining the social web, scraping websites, pulling data from APIs, etc. (see the sketch after this list);
Comfortable on the command line and with unix tools;
Fluency in at least one scripting language, such as Python or Ruby;
Familiarity with relational databases and SQL;
Experience working with large data sets or with distributed computing tools (Map/Reduce, Hadoop, Hive, etc.).
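Since "pulling data from APIs" shows up on that list, here is a minimal sketch of what that usually involves in practice: paging through a JSON endpoint and stacking the records into a table. The URL, query parameters, and field names below are placeholders, not a real API.

```python
# Page through a (hypothetical) JSON endpoint and stack the records into a
# DataFrame. The URL, query parameters, and field names are placeholders.
import pandas as pd
import requests

API_URL = "https://api.example.com/v1/transactions"  # placeholder endpoint

def fetch_page(page, page_size=100):
    """Fetch one page of records and return it as a list of dicts."""
    resp = requests.get(API_URL, params={"page": page, "per_page": page_size}, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]  # assumed response shape

# Pull a few pages and stack them into a single table.
records = [row for page in range(1, 4) for row in fetch_page(page)]
df = pd.DataFrame.from_records(records)

# Example downstream use: average transaction value per day (assumed columns).
df["date"] = pd.to_datetime(df["timestamp"]).dt.date
print(df.groupby("date")["value"].mean())
```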