References
Fernando, L., Bindra, H., & Daudjee, K. (2023). An Experimental Analysis of Quantile Sketches Over Data Streams. In Proceedings of EDBT 2023. https://doi.org/10.48786/edbt.2023.34
Zhang, S., & He, X. (2023). DProvDB: Differentially Private Query Processing With Multi-Analyst Provenance. ArXiv, abs/2309.10240. https://doi.org/10.48550/arXiv.2309.10240
Adeyemi, M., Oladipo, A., Pradeep, R., & Lin, J. (2023). Zero-Shot Cross-Lingual Reranking With Large Language Models for Low-Resource Languages. ArXiv, abs/2312.16159. https://doi.org/10.48550/arXiv.2312.16159
Kamalloo, E., Dziri, N., Clarke, C., & Rafiei, D. (2023). Evaluating Open-Domain Question Answering in the Era of Large Language Models. ArXiv, abs/2305.06984. https://doi.org/10.48550/arXiv.2305.06984
Tang, R., Zhang, X., Lin, J., & Türe, F. (2023). What Do Llamas Really Think? Revealing Preference Biases in Language Model Representations. ArXiv, abs/2311.18812. https://doi.org/10.48550/arXiv.2311.18812
Tang, R., Zhang, X., Ma, X., Lin, J., & Türe, F. (2023). Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models. ArXiv, abs/2310.07712. https://doi.org/10.48550/arXiv.2310.07712
Ilyas, I., Lacerda, J. P., Li, Y., Minhas, U. F., Mousavi, A., Pound, J., … Sumanth, C. (2023). Growing and Serving Large Open-Domain Knowledge Graphs. ArXiv, abs/2305.09464. https://doi.org/10.48550/arXiv.2305.09464
Tamber, M. S., Pradeep, R., & Lin, J. (2023). Scaling Down, LiTting Up: Efficient Zero-Shot Listwise Reranking With Seq2seq Encoder-Decoder Models. ArXiv, abs/2312.16098. https://doi.org/10.48550/arXiv.2312.16098
Zhong, W., Xie, Y., & Lin, J. (2023). Answer Retrieval for Math Questions Using Structural and Dense Retrieval. https://doi.org/10.1007/978-3-031-42448-9_18
Huang, C., Xie, Y., Jiang, Z., Lin, J., & Li, M. (2023). Approximating Human-Like Few-Shot Learning With GPT-based Compression. ArXiv, abs/2308.06942. https://doi.org/10.48550/arXiv.2308.06942