May 8 01:24:43.491718 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025
May 8 01:24:43.491732 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 01:24:43.491738 kernel: BIOS-provided physical RAM map:
May 8 01:24:43.491744 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
May 8 01:24:43.491748 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
May 8 01:24:43.491752 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
May 8 01:24:43.491757 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
May 8 01:24:43.491761 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
May 8 01:24:43.491765 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000008266dfff] usable
May 8 01:24:43.491769 kernel: BIOS-e820: [mem 0x000000008266e000-0x000000008266efff] ACPI NVS
May 8 01:24:43.491774 kernel: BIOS-e820: [mem 0x000000008266f000-0x000000008266ffff] reserved
May 8 01:24:43.491778 kernel: BIOS-e820: [mem 0x0000000082670000-0x000000008afccfff] usable
May 8 01:24:43.491783 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
May 8 01:24:43.491788 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
May 8 01:24:43.491793 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
May 8 01:24:43.491798 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
May 8 01:24:43.491803 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
May 8 01:24:43.491808 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
May 8 01:24:43.491813 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 8 01:24:43.491817 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
May 8 01:24:43.491822 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
May 8 01:24:43.491827 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 8 01:24:43.491831 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
May 8 01:24:43.491836 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
May 8 01:24:43.491841 kernel: NX (Execute Disable) protection: active
May 8 01:24:43.491845 kernel: APIC: Static calls initialized
May 8 01:24:43.491850 kernel: SMBIOS 3.2.1 present.
May 8 01:24:43.491855 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
May 8 01:24:43.491861 kernel: tsc: Detected 3400.000 MHz processor
May 8 01:24:43.491865 kernel: tsc: Detected 3399.906 MHz TSC
May 8 01:24:43.491870 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 01:24:43.491876 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 01:24:43.491880 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
May 8 01:24:43.491885 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
May 8 01:24:43.491890 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 01:24:43.491895 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
May 8 01:24:43.491900 kernel: Using GB pages for direct mapping
May 8 01:24:43.491905 kernel: ACPI: Early table checksum verification disabled
May 8 01:24:43.491911 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
May 8 01:24:43.491916 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
May 8 01:24:43.491923 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
May 8 01:24:43.491928 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
May 8 01:24:43.491933 kernel: ACPI: FACS 0x000000008C66CF80 000040
May 8 01:24:43.491938 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
May 8 01:24:43.491944 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
May 8 01:24:43.491950 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
May 8 01:24:43.491955 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
May 8 01:24:43.491960 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
May 8 01:24:43.491965 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
May 8 01:24:43.491970 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
May 8 01:24:43.491976 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
May 8 01:24:43.491982 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
May 8 01:24:43.491987 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
May 8 01:24:43.491992 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
May 8 01:24:43.491997 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
May 8 01:24:43.492002 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
May 8 01:24:43.492007 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
May 8 01:24:43.492012 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
May 8 01:24:43.492018 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
May 8 01:24:43.492023 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
May 8 01:24:43.492029 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
May 8 01:24:43.492034 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
May 8 01:24:43.492039 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
May 8 01:24:43.492059 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
May 8 01:24:43.492064 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
May 8 01:24:43.492069 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
May 8 01:24:43.492074 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
May 8 01:24:43.492079 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
May 8 01:24:43.492085 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
May 8 01:24:43.492090 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
May 8 01:24:43.492095 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
May 8 01:24:43.492100 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
May 8 01:24:43.492105 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
May 8 01:24:43.492110 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
May 8 01:24:43.492115 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
May 8 01:24:43.492120 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
May 8 01:24:43.492125 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
May 8 01:24:43.492131 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
May 8 01:24:43.492136 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
May 8 01:24:43.492141 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
May 8 01:24:43.492146 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
May 8 01:24:43.492151 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
May 8 01:24:43.492156 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
May 8 01:24:43.492161 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
May 8 01:24:43.492166 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
May 8 01:24:43.492171 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
May 8 01:24:43.492176 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
May 8 01:24:43.492182 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
May 8 01:24:43.492187 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
May 8 01:24:43.492191 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
May 8 01:24:43.492196 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
May 8 01:24:43.492201 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
May 8 01:24:43.492206 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
May 8 01:24:43.492212 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
May 8 01:24:43.492217 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
May 8 01:24:43.492222 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
May 8 01:24:43.492227 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
May 8 01:24:43.492232 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
May 8 01:24:43.492237 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
May 8 01:24:43.492242 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
May 8 01:24:43.492247 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
May 8 01:24:43.492252 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
May 8 01:24:43.492257 kernel: No NUMA configuration found
May 8 01:24:43.492262 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
May 8 01:24:43.492267 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
May 8 01:24:43.492273 kernel: Zone ranges:
May 8 01:24:43.492278 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
May 8 01:24:43.492283 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
May 8 01:24:43.492288 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
May 8 01:24:43.492294 kernel: Movable zone start for each node
May 8 01:24:43.492299 kernel: Early memory node ranges
May 8 01:24:43.492304 kernel:   node 0: [mem 0x0000000000001000-0x0000000000098fff]
May 8 01:24:43.492309 kernel:   node 0: [mem 0x0000000000100000-0x000000003fffffff]
May 8 01:24:43.492314 kernel:   node 0: [mem 0x0000000040400000-0x000000008266dfff]
May 8 01:24:43.492320 kernel:   node 0: [mem 0x0000000082670000-0x000000008afccfff]
May 8 01:24:43.492325 kernel:   node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
May 8 01:24:43.492330 kernel:   node 0: [mem 0x000000008eeff000-0x000000008eefffff]
May 8 01:24:43.492335 kernel:   node 0: [mem 0x0000000100000000-0x000000086effffff]
May 8 01:24:43.492343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
May 8 01:24:43.492349 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 01:24:43.492354 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
May 8 01:24:43.492360 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
May 8 01:24:43.492366 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
May 8 01:24:43.492371 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
May 8 01:24:43.492377 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
May 8 01:24:43.492382 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
May 8 01:24:43.492388 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
May 8 01:24:43.492393 kernel: ACPI: PM-Timer IO Port: 0x1808
May 8 01:24:43.492398 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 8 01:24:43.492404 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 8 01:24:43.492409 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 8 01:24:43.492415 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 8 01:24:43.492420 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 8 01:24:43.492426 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 8 01:24:43.492431 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 8 01:24:43.492436 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 8 01:24:43.492441 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 8 01:24:43.492447 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 8 01:24:43.492452 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 8 01:24:43.492457 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 8 01:24:43.492463 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 8 01:24:43.492469 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 8 01:24:43.492474 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 8 01:24:43.492479 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 8 01:24:43.492485 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
May 8 01:24:43.492490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 01:24:43.492500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 01:24:43.492505 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 01:24:43.492510 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 01:24:43.492538 kernel: TSC deadline timer available
May 8 01:24:43.492566 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
May 8 01:24:43.492571 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
May 8 01:24:43.492596 kernel: Booting paravirtualized kernel on bare hardware
May 8 01:24:43.492602 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 01:24:43.492608 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
May 8 01:24:43.492629 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
May 8 01:24:43.492634 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
May 8 01:24:43.492640 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
May 8 01:24:43.492646 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 01:24:43.492652 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 01:24:43.492658 kernel: random: crng init done
May 8 01:24:43.492663 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
May 8 01:24:43.492668 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
May 8 01:24:43.492674 kernel: Fallback order for Node 0: 0
May 8 01:24:43.492679 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
May 8 01:24:43.492684 kernel: Policy zone: Normal
May 8 01:24:43.492690 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 01:24:43.492696 kernel: software IO TLB: area num 16.
May 8 01:24:43.492702 kernel: Memory: 32718252K/33452980K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 734468K reserved, 0K cma-reserved)
May 8 01:24:43.492707 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
May 8 01:24:43.492713 kernel: ftrace: allocating 37918 entries in 149 pages
May 8 01:24:43.492718 kernel: ftrace: allocated 149 pages with 4 groups
May 8 01:24:43.492723 kernel: Dynamic Preempt: voluntary
May 8 01:24:43.492728 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 01:24:43.492734 kernel: rcu: RCU event tracing is enabled.
May 8 01:24:43.492740 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
May 8 01:24:43.492746 kernel: Trampoline variant of Tasks RCU enabled.
May 8 01:24:43.492752 kernel: Rude variant of Tasks RCU enabled.
May 8 01:24:43.492757 kernel: Tracing variant of Tasks RCU enabled.
May 8 01:24:43.492762 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 01:24:43.492768 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
May 8 01:24:43.492773 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
May 8 01:24:43.492779 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 01:24:43.492784 kernel: Console: colour VGA+ 80x25
May 8 01:24:43.492789 kernel: printk: console [tty0] enabled
May 8 01:24:43.492795 kernel: printk: console [ttyS1] enabled
May 8 01:24:43.492801 kernel: ACPI: Core revision 20230628
May 8 01:24:43.492806 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
May 8 01:24:43.492811 kernel: APIC: Switch to symmetric I/O mode setup
May 8 01:24:43.492817 kernel: DMAR: Host address width 39
May 8 01:24:43.492822 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
May 8 01:24:43.492828 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
May 8 01:24:43.492833 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
May 8 01:24:43.492839 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
May 8 01:24:43.492845 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
May 8 01:24:43.492850 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
May 8 01:24:43.492856 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
May 8 01:24:43.492861 kernel: x2apic enabled
May 8 01:24:43.492866 kernel: APIC: Switched APIC routing to: cluster x2apic
May 8 01:24:43.492872 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
May 8 01:24:43.492877 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
May 8 01:24:43.492883 kernel: CPU0: Thermal monitoring enabled (TM1)
May 8 01:24:43.492888 kernel: process: using mwait in idle threads
May 8 01:24:43.492894 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 8 01:24:43.492900 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 8 01:24:43.492905 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 01:24:43.492910 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 8 01:24:43.492915 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 8 01:24:43.492921 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 8 01:24:43.492926 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 01:24:43.492931 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 8 01:24:43.492937 kernel: RETBleed: Mitigation: Enhanced IBRS
May 8 01:24:43.492942 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 01:24:43.492947 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 01:24:43.492953 kernel: TAA: Mitigation: TSX disabled
May 8 01:24:43.492959 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
May 8 01:24:43.492964 kernel: SRBDS: Mitigation: Microcode
May 8 01:24:43.492969 kernel: GDS: Mitigation: Microcode
May 8 01:24:43.492975 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 01:24:43.492980 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 01:24:43.492985 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 01:24:43.492990 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 8 01:24:43.492996 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 8 01:24:43.493001 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 01:24:43.493006 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 8 01:24:43.493012 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 8 01:24:43.493018 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
May 8 01:24:43.493023 kernel: Freeing SMP alternatives memory: 32K
May 8 01:24:43.493028 kernel: pid_max: default: 32768 minimum: 301
May 8 01:24:43.493034 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 01:24:43.493039 kernel: landlock: Up and running.
May 8 01:24:43.493044 kernel: SELinux: Initializing.
May 8 01:24:43.493050 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 01:24:43.493055 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 01:24:43.493060 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 8 01:24:43.493066 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 8 01:24:43.493071 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 8 01:24:43.493077 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 8 01:24:43.493083 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
May 8 01:24:43.493088 kernel: ... version:                4
May 8 01:24:43.493094 kernel: ... bit width:              48
May 8 01:24:43.493099 kernel: ... generic registers:      4
May 8 01:24:43.493104 kernel: ... value mask:             0000ffffffffffff
May 8 01:24:43.493110 kernel: ... max period:             00007fffffffffff
May 8 01:24:43.493115 kernel: ... fixed-purpose events:   3
May 8 01:24:43.493120 kernel: ... event mask:             000000070000000f
May 8 01:24:43.493127 kernel: signal: max sigframe size: 2032
May 8 01:24:43.493132 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
May 8 01:24:43.493138 kernel: rcu: Hierarchical SRCU implementation.
May 8 01:24:43.493143 kernel: rcu: Max phase no-delay instances is 400.
May 8 01:24:43.493148 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
May 8 01:24:43.493154 kernel: smp: Bringing up secondary CPUs ...
May 8 01:24:43.493159 kernel: smpboot: x86: Booting SMP configuration:
May 8 01:24:43.493164 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
May 8 01:24:43.493170 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 8 01:24:43.493176 kernel: smp: Brought up 1 node, 16 CPUs
May 8 01:24:43.493182 kernel: smpboot: Max logical packages: 1
May 8 01:24:43.493187 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
May 8 01:24:43.493193 kernel: devtmpfs: initialized
May 8 01:24:43.493198 kernel: x86/mm: Memory block size: 128MB
May 8 01:24:43.493204 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8266e000-0x8266efff] (4096 bytes)
May 8 01:24:43.493209 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
May 8 01:24:43.493215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 01:24:43.493221 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
May 8 01:24:43.493226 kernel: pinctrl core: initialized pinctrl subsystem
May 8 01:24:43.493232 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 01:24:43.493237 kernel: audit: initializing netlink subsys (disabled)
May 8 01:24:43.493243 kernel: audit: type=2000 audit(1746667477.040:1): state=initialized audit_enabled=0 res=1
May 8 01:24:43.493248 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 01:24:43.493253 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 01:24:43.493258 kernel: cpuidle: using governor menu
May 8 01:24:43.493264 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 01:24:43.493270 kernel: dca service started, version 1.12.1
May 8 01:24:43.493275 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 8 01:24:43.493281 kernel: PCI: Using configuration type 1 for base access
May 8 01:24:43.493286 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
May 8 01:24:43.493292 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 01:24:43.493297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 01:24:43.493302 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 01:24:43.493308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 01:24:43.493313 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 01:24:43.493319 kernel: ACPI: Added _OSI(Module Device)
May 8 01:24:43.493325 kernel: ACPI: Added _OSI(Processor Device)
May 8 01:24:43.493330 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 01:24:43.493335 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 01:24:43.493341 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
May 8 01:24:43.493346 kernel: ACPI: Dynamic OEM Table Load:
May 8 01:24:43.493351 kernel: ACPI: SSDT 0xFFFF9132C0E3A000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
May 8 01:24:43.493357 kernel: ACPI: Dynamic OEM Table Load:
May 8 01:24:43.493362 kernel: ACPI: SSDT 0xFFFF9132C1E0E800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
May 8 01:24:43.493369 kernel: ACPI: Dynamic OEM Table Load:
May 8 01:24:43.493374 kernel: ACPI: SSDT 0xFFFF9132C0DE5300 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
May 8 01:24:43.493379 kernel: ACPI: Dynamic OEM Table Load:
May 8 01:24:43.493385 kernel: ACPI: SSDT 0xFFFF9132C1E0A800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
May 8 01:24:43.493390 kernel: ACPI: Dynamic OEM Table Load:
May 8 01:24:43.493395 kernel: ACPI: SSDT 0xFFFF9132C0E55000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
May 8 01:24:43.493400 kernel: ACPI: Dynamic OEM Table Load:
May 8 01:24:43.493406 kernel: ACPI: SSDT 0xFFFF9132C154A000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
May 8 01:24:43.493411 kernel: ACPI: _OSC evaluated successfully for all CPUs
May 8 01:24:43.493416 kernel: ACPI: Interpreter enabled
May 8 01:24:43.493423 kernel: ACPI: PM: (supports S0 S5)
May 8 01:24:43.493428 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 01:24:43.493433 kernel: HEST: Enabling Firmware First mode for corrected errors.
May 8 01:24:43.493439 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
May 8 01:24:43.493444 kernel: HEST: Table parsing has been initialized.
May 8 01:24:43.493449 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
May 8 01:24:43.493455 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 01:24:43.493460 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 8 01:24:43.493466 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
May 8 01:24:43.493472 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
May 8 01:24:43.493477 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
May 8 01:24:43.493483 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
May 8 01:24:43.493488 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
May 8 01:24:43.493505 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
May 8 01:24:43.493511 kernel: ACPI: \_TZ_.FN00: New power resource
May 8 01:24:43.493537 kernel: ACPI: \_TZ_.FN01: New power resource
May 8 01:24:43.493543 kernel: ACPI: \_TZ_.FN02: New power resource
May 8 01:24:43.493548 kernel: ACPI: \_TZ_.FN03: New power resource
May 8 01:24:43.493569 kernel: ACPI: \_TZ_.FN04: New power resource
May 8 01:24:43.493575 kernel: ACPI: \PIN_: New power resource
May 8 01:24:43.493580 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
May 8 01:24:43.493655 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 01:24:43.493706 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
May 8 01:24:43.493752 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
May 8 01:24:43.493760 kernel: PCI host bridge to bus 0000:00
May 8 01:24:43.493812 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 01:24:43.493855 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 01:24:43.493896 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 01:24:43.493938 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
May 8 01:24:43.493979 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
May 8 01:24:43.494020 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
May 8 01:24:43.494079 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
May 8 01:24:43.494142 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
May 8 01:24:43.494192 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
May 8 01:24:43.494244 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
May 8 01:24:43.494292 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
May 8 01:24:43.494344 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
May 8 01:24:43.494392 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
May 8 01:24:43.494445 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
May 8 01:24:43.494496 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
May 8 01:24:43.494587 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
May 8 01:24:43.494639 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
May 8 01:24:43.494686 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
May 8 01:24:43.494734 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
May 8 01:24:43.494785 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
May 8 01:24:43.494834 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 8 01:24:43.494887 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
May 8 01:24:43.494934 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 8 01:24:43.494985 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
May 8 01:24:43.495032 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
May 8 01:24:43.495081 kernel: pci 0000:00:16.0: PME# supported from D3hot
May 8 01:24:43.495133 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
May 8 01:24:43.495188 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
May 8 01:24:43.495238 kernel: pci 0000:00:16.1: PME# supported from D3hot
May 8 01:24:43.495289 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
May 8 01:24:43.495337 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
May 8 01:24:43.495383 kernel: pci 0000:00:16.4: PME# supported from D3hot
May 8 01:24:43.495438 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
May 8 01:24:43.495485 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
May 8 01:24:43.495602 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
May 8 01:24:43.495649 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
May 8 01:24:43.495696 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
May 8 01:24:43.495743 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
May 8 01:24:43.495789 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
May 8 01:24:43.495839 kernel: pci 0000:00:17.0: PME# supported from D3hot
May 8 01:24:43.495891 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
May 8 01:24:43.495940 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
May 8 01:24:43.495995 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
May 8 01:24:43.496046 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
May 8 01:24:43.496098 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
May 8 01:24:43.496146 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
May 8 01:24:43.496199 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
May 8 01:24:43.496247 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
May 8 01:24:43.496300 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
May 8 01:24:43.496351 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
May 8 01:24:43.496403 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
May 8 01:24:43.496450 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 8 01:24:43.496506 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
May 8 01:24:43.496601 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
May 8 01:24:43.496649 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
May 8 01:24:43.496699 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
May 8 01:24:43.496750 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
May 8 01:24:43.496799 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
May 8 01:24:43.496853 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
May 8 01:24:43.496903 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
May 8 01:24:43.496952 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
May 8 01:24:43.497002 kernel: pci 0000:01:00.0: PME# supported from D3cold
May 8 01:24:43.497051 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 8 01:24:43.497099 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 8 01:24:43.497153 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
May 8 01:24:43.497203 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
May 8 01:24:43.497251 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
May 8 01:24:43.497300 kernel: pci 0000:01:00.1: PME# supported from D3cold
May 8 01:24:43.497351 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 8 01:24:43.497401 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 8 01:24:43.497449 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 8 01:24:43.497498 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
May 8 01:24:43.497589 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
May 8 01:24:43.497636 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
May 8 01:24:43.497689 kernel: pci
0000:03:00.0: working around ROM BAR overlap defect May 8 01:24:43.497738 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 May 8 01:24:43.497790 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] May 8 01:24:43.497839 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] May 8 01:24:43.497886 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] May 8 01:24:43.497936 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 8 01:24:43.497984 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 8 01:24:43.498032 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 8 01:24:43.498079 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 8 01:24:43.498138 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect May 8 01:24:43.498188 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 May 8 01:24:43.498236 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] May 8 01:24:43.498285 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] May 8 01:24:43.498333 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] May 8 01:24:43.498382 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold May 8 01:24:43.498432 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 8 01:24:43.498478 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 8 01:24:43.498580 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 8 01:24:43.498653 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 8 01:24:43.498706 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 May 8 01:24:43.498756 kernel: pci 0000:06:00.0: enabling Extended Tags May 8 01:24:43.498805 kernel: pci 0000:06:00.0: supports D1 D2 May 8 01:24:43.498877 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 8 01:24:43.498926 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 8 01:24:43.498977 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 8 
01:24:43.499044 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 8 01:24:43.499117 kernel: pci_bus 0000:07: extended config space not accessible May 8 01:24:43.499176 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 May 8 01:24:43.499229 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] May 8 01:24:43.499282 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] May 8 01:24:43.499333 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] May 8 01:24:43.499387 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 01:24:43.499439 kernel: pci 0000:07:00.0: supports D1 D2 May 8 01:24:43.499490 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 8 01:24:43.499578 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 8 01:24:43.499629 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 8 01:24:43.499679 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 8 01:24:43.499687 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 May 8 01:24:43.499694 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 May 8 01:24:43.499701 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 May 8 01:24:43.499707 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 May 8 01:24:43.499713 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 May 8 01:24:43.499719 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 May 8 01:24:43.499725 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 May 8 01:24:43.499730 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 May 8 01:24:43.499736 kernel: iommu: Default domain type: Translated May 8 01:24:43.499742 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 01:24:43.499748 kernel: PCI: Using ACPI for IRQ routing May 8 01:24:43.499755 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 01:24:43.499760 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] May 8 01:24:43.499766 kernel: e820: reserve RAM buffer [mem 0x8266e000-0x83ffffff] May 8 01:24:43.499772 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] May 8 01:24:43.499777 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] May 8 01:24:43.499783 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] May 8 01:24:43.499788 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] May 8 01:24:43.499840 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device May 8 01:24:43.499892 kernel: pci 0000:07:00.0: vgaarb: bridge control possible May 8 01:24:43.499946 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 01:24:43.499955 kernel: vgaarb: loaded May 8 01:24:43.499961 kernel: clocksource: Switched to clocksource tsc-early May 8 01:24:43.499967 kernel: VFS: Disk quotas dquot_6.6.0 May 8 01:24:43.499973 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 01:24:43.499979 kernel: pnp: PnP ACPI init May 8 01:24:43.500029 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved May 8 01:24:43.500078 kernel: pnp 00:02: [dma 0 disabled] May 8 01:24:43.500130 kernel: pnp 00:03: [dma 0 disabled] May 8 01:24:43.500180 kernel: system 00:04: [io 0x0680-0x069f] has been reserved May 8 01:24:43.500225 kernel: system 00:04: [io 0x164e-0x164f] has been reserved May 8 01:24:43.500273 kernel: system 00:05: [io 0x1854-0x1857] has been reserved May 8 01:24:43.500321 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved May 8 01:24:43.500365 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved May 8 01:24:43.500412 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved May 8 01:24:43.500457 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved May 8 01:24:43.500532 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved May 8 01:24:43.500577 kernel: system 00:06: [mem 
0xfed90000-0xfed93fff] could not be reserved May 8 01:24:43.500621 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved May 8 01:24:43.500664 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved May 8 01:24:43.500713 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved May 8 01:24:43.500760 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved May 8 01:24:43.500805 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved May 8 01:24:43.500848 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved May 8 01:24:43.500892 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved May 8 01:24:43.500934 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved May 8 01:24:43.500979 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved May 8 01:24:43.501027 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved May 8 01:24:43.501037 kernel: pnp: PnP ACPI: found 10 devices May 8 01:24:43.501043 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 01:24:43.501049 kernel: NET: Registered PF_INET protocol family May 8 01:24:43.501055 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 01:24:43.501061 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) May 8 01:24:43.501067 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 01:24:43.501073 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 01:24:43.501080 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 8 01:24:43.501086 kernel: TCP: Hash tables configured (established 262144 bind 65536) May 8 01:24:43.501092 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 01:24:43.501098 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 
01:24:43.501104 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 01:24:43.501110 kernel: NET: Registered PF_XDP protocol family May 8 01:24:43.501160 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] May 8 01:24:43.501209 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] May 8 01:24:43.501259 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] May 8 01:24:43.501310 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] May 8 01:24:43.501362 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 8 01:24:43.501413 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] May 8 01:24:43.501464 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 8 01:24:43.501517 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 01:24:43.501568 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 8 01:24:43.501616 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 8 01:24:43.501666 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 8 01:24:43.501717 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 8 01:24:43.501767 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 8 01:24:43.501815 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 8 01:24:43.501864 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 8 01:24:43.501911 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 8 01:24:43.501963 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 8 01:24:43.502013 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 8 01:24:43.502062 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 8 01:24:43.502112 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 8 01:24:43.502161 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 8 01:24:43.502210 
kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 8 01:24:43.502258 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 8 01:24:43.502307 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 8 01:24:43.502352 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc May 8 01:24:43.502397 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 01:24:43.502440 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 01:24:43.502483 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 01:24:43.502529 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] May 8 01:24:43.502610 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] May 8 01:24:43.502660 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] May 8 01:24:43.502705 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] May 8 01:24:43.502759 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] May 8 01:24:43.502803 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] May 8 01:24:43.502853 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 8 01:24:43.502899 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] May 8 01:24:43.502947 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] May 8 01:24:43.502992 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] May 8 01:24:43.503041 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] May 8 01:24:43.503088 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] May 8 01:24:43.503097 kernel: PCI: CLS 64 bytes, default 64 May 8 01:24:43.503103 kernel: DMAR: No ATSR found May 8 01:24:43.503109 kernel: DMAR: No SATC found May 8 01:24:43.503115 kernel: DMAR: dmar0: Using Queued invalidation May 8 01:24:43.503163 kernel: pci 0000:00:00.0: Adding to iommu group 0 May 8 01:24:43.503214 kernel: pci 0000:00:01.0: Adding to iommu group 1 May 
8 01:24:43.503263 kernel: pci 0000:00:08.0: Adding to iommu group 2 May 8 01:24:43.503314 kernel: pci 0000:00:12.0: Adding to iommu group 3 May 8 01:24:43.503363 kernel: pci 0000:00:14.0: Adding to iommu group 4 May 8 01:24:43.503411 kernel: pci 0000:00:14.2: Adding to iommu group 4 May 8 01:24:43.503460 kernel: pci 0000:00:15.0: Adding to iommu group 5 May 8 01:24:43.503510 kernel: pci 0000:00:15.1: Adding to iommu group 5 May 8 01:24:43.503560 kernel: pci 0000:00:16.0: Adding to iommu group 6 May 8 01:24:43.503608 kernel: pci 0000:00:16.1: Adding to iommu group 6 May 8 01:24:43.503657 kernel: pci 0000:00:16.4: Adding to iommu group 6 May 8 01:24:43.503707 kernel: pci 0000:00:17.0: Adding to iommu group 7 May 8 01:24:43.503756 kernel: pci 0000:00:1b.0: Adding to iommu group 8 May 8 01:24:43.503805 kernel: pci 0000:00:1b.4: Adding to iommu group 9 May 8 01:24:43.503853 kernel: pci 0000:00:1b.5: Adding to iommu group 10 May 8 01:24:43.503903 kernel: pci 0000:00:1c.0: Adding to iommu group 11 May 8 01:24:43.503951 kernel: pci 0000:00:1c.3: Adding to iommu group 12 May 8 01:24:43.503999 kernel: pci 0000:00:1e.0: Adding to iommu group 13 May 8 01:24:43.504048 kernel: pci 0000:00:1f.0: Adding to iommu group 14 May 8 01:24:43.504099 kernel: pci 0000:00:1f.4: Adding to iommu group 14 May 8 01:24:43.504163 kernel: pci 0000:00:1f.5: Adding to iommu group 14 May 8 01:24:43.504212 kernel: pci 0000:01:00.0: Adding to iommu group 1 May 8 01:24:43.504261 kernel: pci 0000:01:00.1: Adding to iommu group 1 May 8 01:24:43.504311 kernel: pci 0000:03:00.0: Adding to iommu group 15 May 8 01:24:43.504360 kernel: pci 0000:04:00.0: Adding to iommu group 16 May 8 01:24:43.504409 kernel: pci 0000:06:00.0: Adding to iommu group 17 May 8 01:24:43.504459 kernel: pci 0000:07:00.0: Adding to iommu group 17 May 8 01:24:43.504469 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O May 8 01:24:43.504475 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 8 
01:24:43.504481 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) May 8 01:24:43.504487 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer May 8 01:24:43.504492 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules May 8 01:24:43.504500 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules May 8 01:24:43.504506 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules May 8 01:24:43.504596 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) May 8 01:24:43.504607 kernel: Initialise system trusted keyrings May 8 01:24:43.504613 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 May 8 01:24:43.504618 kernel: Key type asymmetric registered May 8 01:24:43.504624 kernel: Asymmetric key parser 'x509' registered May 8 01:24:43.504630 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 01:24:43.504635 kernel: io scheduler mq-deadline registered May 8 01:24:43.504641 kernel: io scheduler kyber registered May 8 01:24:43.504647 kernel: io scheduler bfq registered May 8 01:24:43.504694 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 May 8 01:24:43.504745 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 May 8 01:24:43.504794 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 May 8 01:24:43.504842 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 May 8 01:24:43.504890 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 May 8 01:24:43.504938 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 May 8 01:24:43.504991 kernel: thermal LNXTHERM:00: registered as thermal_zone0 May 8 01:24:43.505000 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) May 8 01:24:43.505006 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
May 8 01:24:43.505014 kernel: pstore: Using crash dump compression: deflate
May 8 01:24:43.505019 kernel: pstore: Registered erst as persistent store backend
May 8 01:24:43.505025 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 01:24:43.505031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 01:24:43.505037 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 01:24:43.505043 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 8 01:24:43.505048 kernel: hpet_acpi_add: no address or irqs in _CRS
May 8 01:24:43.505099 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
May 8 01:24:43.505109 kernel: i8042: PNP: No PS/2 controller found.
May 8 01:24:43.505153 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
May 8 01:24:43.505198 kernel: rtc_cmos rtc_cmos: registered as rtc0
May 8 01:24:43.505242 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-08T01:24:42 UTC (1746667482)
May 8 01:24:43.505286 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
May 8 01:24:43.505294 kernel: intel_pstate: Intel P-state driver initializing
May 8 01:24:43.505300 kernel: intel_pstate: Disabling energy efficiency optimization
May 8 01:24:43.505306 kernel: intel_pstate: HWP enabled
May 8 01:24:43.505313 kernel: NET: Registered PF_INET6 protocol family
May 8 01:24:43.505319 kernel: Segment Routing with IPv6
May 8 01:24:43.505324 kernel: In-situ OAM (IOAM) with IPv6
May 8 01:24:43.505331 kernel: NET: Registered PF_PACKET protocol family
May 8 01:24:43.505336 kernel: Key type dns_resolver registered
May 8 01:24:43.505342 kernel: microcode: Current revision: 0x00000102
May 8 01:24:43.505348 kernel: microcode: Microcode Update Driver: v2.2.
May 8 01:24:43.505353 kernel: IPI shorthand broadcast: enabled
May 8 01:24:43.505359 kernel: sched_clock: Marking stable (2496000679, 1442204298)->(4501598261, -563393284)
May 8 01:24:43.505366 kernel: registered taskstats version 1
May 8 01:24:43.505371 kernel: Loading compiled-in X.509 certificates
May 8 01:24:43.505377 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6'
May 8 01:24:43.505383 kernel: Key type .fscrypt registered
May 8 01:24:43.505388 kernel: Key type fscrypt-provisioning registered
May 8 01:24:43.505394 kernel: ima: Allocated hash algorithm: sha1
May 8 01:24:43.505399 kernel: ima: No architecture policies found
May 8 01:24:43.505405 kernel: clk: Disabling unused clocks
May 8 01:24:43.505411 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 8 01:24:43.505418 kernel: Write protecting the kernel read-only data: 38912k
May 8 01:24:43.505423 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 8 01:24:43.505429 kernel: Run /init as init process
May 8 01:24:43.505435 kernel: with arguments:
May 8 01:24:43.505440 kernel: /init
May 8 01:24:43.505446 kernel: with environment:
May 8 01:24:43.505451 kernel: HOME=/
May 8 01:24:43.505457 kernel: TERM=linux
May 8 01:24:43.505462 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 01:24:43.505469 systemd[1]: Successfully made /usr/ read-only.
May 8 01:24:43.505477 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 01:24:43.505484 systemd[1]: Detected architecture x86-64.
May 8 01:24:43.505489 systemd[1]: Running in initrd.
May 8 01:24:43.505497 systemd[1]: No hostname configured, using default hostname.
May 8 01:24:43.505503 systemd[1]: Hostname set to .
May 8 01:24:43.505509 systemd[1]: Initializing machine ID from random generator.
May 8 01:24:43.505538 systemd[1]: Queued start job for default target initrd.target.
May 8 01:24:43.505544 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 01:24:43.505551 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 01:24:43.505571 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 01:24:43.505577 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 01:24:43.505583 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 01:24:43.505589 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 01:24:43.505597 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 01:24:43.505603 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 01:24:43.505609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 01:24:43.505615 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 01:24:43.505621 systemd[1]: Reached target paths.target - Path Units.
May 8 01:24:43.505627 systemd[1]: Reached target slices.target - Slice Units.
May 8 01:24:43.505633 systemd[1]: Reached target swap.target - Swaps.
May 8 01:24:43.505639 systemd[1]: Reached target timers.target - Timer Units.
May 8 01:24:43.505646 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 01:24:43.505652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 01:24:43.505658 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 01:24:43.505664 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 01:24:43.505670 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 01:24:43.505676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 01:24:43.505682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 01:24:43.505688 systemd[1]: Reached target sockets.target - Socket Units.
May 8 01:24:43.505694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 01:24:43.505701 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
May 8 01:24:43.505706 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
May 8 01:24:43.505712 kernel: clocksource: Switched to clocksource tsc
May 8 01:24:43.505718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 01:24:43.505724 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 01:24:43.505730 systemd[1]: Starting systemd-fsck-usr.service...
May 8 01:24:43.505736 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 01:24:43.505753 systemd-journald[268]: Collecting audit messages is disabled.
May 8 01:24:43.505768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 01:24:43.505774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 01:24:43.505781 systemd-journald[268]: Journal started
May 8 01:24:43.505795 systemd-journald[268]: Runtime Journal (/run/log/journal/970408b61fd44f3885141e2d9ee1362e) is 8M, max 639.9M, 631.9M free.
May 8 01:24:43.527508 systemd-modules-load[270]: Inserted module 'overlay'
May 8 01:24:43.536499 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 01:24:43.536858 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 01:24:43.575511 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 01:24:43.575544 kernel: Bridge firewalling registered
May 8 01:24:43.553730 systemd-modules-load[270]: Inserted module 'br_netfilter'
May 8 01:24:43.575650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 01:24:43.612850 systemd[1]: Finished systemd-fsck-usr.service.
May 8 01:24:43.632847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 01:24:43.649920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 01:24:43.686961 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 01:24:43.688626 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 01:24:43.690243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 01:24:43.691775 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 01:24:43.696535 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 01:24:43.697190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 01:24:43.697297 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 01:24:43.697833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 01:24:43.698671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 01:24:43.717384 systemd-resolved[303]: Positive Trust Anchors:
May 8 01:24:43.717389 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 01:24:43.717414 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 01:24:43.718956 systemd-resolved[303]: Defaulting to hostname 'linux'.
May 8 01:24:43.719828 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 01:24:43.748853 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 01:24:43.813381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 01:24:43.834808 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 01:24:43.870893 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 01:24:43.938283 dracut-cmdline[308]: dracut-dracut-053
May 8 01:24:43.945730 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 01:24:44.122535 kernel: SCSI subsystem initialized
May 8 01:24:44.136527 kernel: Loading iSCSI transport class v2.0-870.
May 8 01:24:44.149583 kernel: iscsi: registered transport (tcp)
May 8 01:24:44.170119 kernel: iscsi: registered transport (qla4xxx)
May 8 01:24:44.170139 kernel: QLogic iSCSI HBA Driver
May 8 01:24:44.193086 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 01:24:44.215759 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 01:24:44.255160 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 01:24:44.255192 kernel: device-mapper: uevent: version 1.0.3
May 8 01:24:44.263929 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 01:24:44.300534 kernel: raid6: avx2x4 gen() 47198 MB/s
May 8 01:24:44.321525 kernel: raid6: avx2x2 gen() 53737 MB/s
May 8 01:24:44.347644 kernel: raid6: avx2x1 gen() 45109 MB/s
May 8 01:24:44.347662 kernel: raid6: using algorithm avx2x2 gen() 53737 MB/s
May 8 01:24:44.374685 kernel: raid6: .... xor() 32416 MB/s, rmw enabled
May 8 01:24:44.374702 kernel: raid6: using avx2x2 recovery algorithm
May 8 01:24:44.394554 kernel: xor: automatically using best checksumming function   avx
May 8 01:24:44.493541 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 01:24:44.498653 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 01:24:44.523805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 01:24:44.531542 systemd-udevd[493]: Using default interface naming scheme 'v255'.
May 8 01:24:44.534781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 01:24:44.562643 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 01:24:44.616838 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
May 8 01:24:44.633865 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 01:24:44.658761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 01:24:44.745459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 01:24:44.769687 kernel: pps_core: LinuxPPS API ver. 1 registered
May 8 01:24:44.769741 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 8 01:24:44.770584 kernel: cryptd: max_cpu_qlen set to 1000
May 8 01:24:44.782500 kernel: PTP clock support registered
May 8 01:24:44.783567 kernel: libata version 3.00 loaded.
May 8 01:24:44.792509 kernel: ahci 0000:00:17.0: version 3.0
May 8 01:24:44.903440 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 01:24:44.903461 kernel: ACPI: bus type USB registered
May 8 01:24:44.903471 kernel: usbcore: registered new interface driver usbfs
May 8 01:24:44.903490 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
May 8 01:24:44.903579 kernel: AES CTR mode by8 optimization enabled
May 8 01:24:44.903587 kernel: usbcore: registered new interface driver hub
May 8 01:24:44.903598 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
May 8 01:24:44.903665 kernel: usbcore: registered new device driver usb
May 8 01:24:44.903673 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
May 8 01:24:44.903680 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
May 8 01:24:44.903687 kernel: scsi host0: ahci
May 8 01:24:44.903752 kernel: scsi host1: ahci
May 8 01:24:44.903810 kernel: scsi host2: ahci
May 8 01:24:44.903869 kernel: scsi host3: ahci
May 8 01:24:44.903927 kernel: scsi host4: ahci
May 8 01:24:44.903986 kernel: scsi host5: ahci
May 8 01:24:44.904043 kernel: igb 0000:03:00.0: added PHC on eth0
May 8 01:24:44.981293 kernel: scsi host6: ahci
May 8 01:24:44.981390 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 8 01:24:44.981488 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
May 8 01:24:44.981510 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:7a
May 8 01:24:44.981613 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
May 8 01:24:44.981629 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
May 8 01:24:44.981640 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
May 8 01:24:44.981649 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
May 8 01:24:44.981660 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
May 8 01:24:44.981671 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
May 8 01:24:44.981682 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
May 8 01:24:44.981789 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
May 8 01:24:44.854703 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 01:24:45.024757 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006
May 8 01:24:45.486396 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
May 8 01:24:45.486479 kernel: igb 0000:04:00.0: added PHC on eth1
May 8 01:24:45.486554 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
May 8 01:24:45.486619 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:7b
May 8 01:24:45.486686 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
May 8 01:24:45.486751 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
May 8 01:24:45.486813 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 01:24:45.486821 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 01:24:45.486829 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 8 01:24:45.486836 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 8 01:24:45.486843 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 8 01:24:45.486850 kernel: ata7: SATA link down (SStatus 0 SControl 300)
May 8 01:24:45.486858 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 01:24:45.486867 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
May 8 01:24:45.486875 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
May 8 01:24:45.486940 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
May 8 01:24:45.486948 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
May 8 01:24:45.487011 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 8 01:24:45.487019 kernel: ata2.00: Features: NCQ-prio
May 8 01:24:45.487026 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 8 01:24:45.487034 kernel: ata1.00: Features: NCQ-prio
May 8 01:24:45.487041 kernel: ata2.00: configured for UDMA/133
May 8 01:24:45.487050 kernel: ata1.00: configured for UDMA/133
May 8 01:24:45.487058 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
May 8 01:24:45.487129 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
May 8 01:24:45.580677 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
May 8 01:24:45.580790 kernel: igb 0000:04:00.0 eno2: renamed from eth1
May 8 01:24:45.580902 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
May 8 01:24:45.581002 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
May 8 01:24:45.581097 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
May 8 01:24:45.581187 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
May 8 01:24:45.581252 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
May 8 01:24:45.581313 kernel: hub 1-0:1.0: USB hub found
May 8 01:24:45.581391 kernel: hub 1-0:1.0: 16 ports detected
May 8 01:24:45.581458 kernel: hub 2-0:1.0: USB hub found
May 8 01:24:45.581540 kernel: igb 0000:03:00.0 eno1: renamed from eth0
May 8 01:24:45.581609 kernel: hub 2-0:1.0: 10 ports detected
May 8 01:24:45.581677 kernel: ata2.00: Enabling discard_zeroes_data
May 8 01:24:45.581685 kernel: ata1.00: Enabling discard_zeroes_data
May 8 01:24:45.581692 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
May 8 01:24:45.581755 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
May 8 01:24:45.581816 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
May 8 01:24:45.581874 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 8 01:24:45.581933 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 8 01:24:45.581993 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
May 8 01:24:45.582053 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 8 01:24:45.582112 kernel: sd 1:0:0:0: [sdb] Write Protect is off
May 8 01:24:45.582170 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
May 8 01:24:45.582229 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
May 8 01:24:45.582288 kernel: ata1.00: Enabling discard_zeroes_data
May 8 01:24:45.582296 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 8 01:24:45.582362 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 8 01:24:45.582423 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 01:24:45.582431 kernel: GPT:9289727 != 937703087
May 8 01:24:45.582438 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 01:24:45.582446 kernel: GPT:9289727 != 937703087
May 8 01:24:45.582452 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 01:24:45.582459 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 01:24:45.582466 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 8 01:24:45.582536 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006
May 8 01:24:46.047815 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
May 8 01:24:46.048216 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
May 8 01:24:46.048597 kernel: ata2.00: Enabling discard_zeroes_data
May 8 01:24:46.048641 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
May 8 01:24:46.048963 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (546)
May 8 01:24:46.049008 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (695)
May 8 01:24:46.049045 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
May 8 01:24:46.179577 kernel: ata1.00: Enabling discard_zeroes_data
May 8 01:24:46.179632 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 01:24:46.179671 kernel: hub 1-14:1.0: USB hub found
May 8 01:24:46.180117 kernel: hub 1-14:1.0: 4 ports detected
May 8 01:24:46.180486 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
May 8 01:24:46.180851 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
May 8 01:24:46.181180 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 8 01:24:46.181532 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
May 8 01:24:46.181875 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
May 8 01:24:46.182199 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
May 8 01:24:44.880061 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 01:24:44.948636 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 01:24:45.005755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 01:24:46.223433 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 01:24:46.223455 kernel: usbcore: registered new interface driver usbhid
May 8 01:24:45.056659 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 01:24:46.250591 kernel: usbhid: USB HID core driver
May 8 01:24:46.250607 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
May 8 01:24:45.066636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 01:24:45.066709 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 01:24:45.078649 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 01:24:46.283673 disk-uuid[716]: Primary Header is updated.
May 8 01:24:46.283673 disk-uuid[716]: Secondary Entries is updated.
May 8 01:24:46.283673 disk-uuid[716]: Secondary Header is updated.
May 8 01:24:46.338652 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
May 8 01:24:46.338798 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
May 8 01:24:46.338814 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
May 8 01:24:45.096672 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 01:24:45.106592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 01:24:45.106665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 01:24:45.117652 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 01:24:45.142677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 01:24:45.153066 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 01:24:45.174840 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 01:24:45.200669 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 01:24:45.214608 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 01:24:45.623420 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT.
May 8 01:24:45.646319 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM.
May 8 01:24:45.670261 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM.
May 8 01:24:45.687138 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A.
May 8 01:24:45.698585 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A.
May 8 01:24:45.720647 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 01:24:46.746028 kernel: ata1.00: Enabling discard_zeroes_data
May 8 01:24:46.754105 disk-uuid[717]: The operation has completed successfully.
May 8 01:24:46.762591 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 01:24:46.788627 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 01:24:46.788675 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 01:24:46.843678 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 01:24:46.868613 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 8 01:24:46.868671 sh[747]: Success
May 8 01:24:46.904392 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 01:24:46.929435 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 01:24:46.937774 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 01:24:46.992035 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a
May 8 01:24:46.992064 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 01:24:47.001644 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 01:24:47.008658 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 01:24:47.014516 kernel: BTRFS info (device dm-0): using free space tree
May 8 01:24:47.028548 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 8 01:24:47.029076 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 01:24:47.038976 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 01:24:47.083981 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 01:24:47.083993 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 01:24:47.084001 kernel: BTRFS info (device sda6): using free space tree
May 8 01:24:47.046806 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 01:24:47.102245 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 01:24:47.102262 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 01:24:47.119503 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 01:24:47.131829 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 01:24:47.142874 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 01:24:47.180805 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 01:24:47.192611 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 01:24:47.225628 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 01:24:47.239368 systemd-networkd[927]: lo: Link UP
May 8 01:24:47.243303 ignition[897]: Ignition 2.20.0
May 8 01:24:47.239370 systemd-networkd[927]: lo: Gained carrier
May 8 01:24:47.243308 ignition[897]: Stage: fetch-offline
May 8 01:24:47.241991 systemd-networkd[927]: Enumeration completed
May 8 01:24:47.243329 ignition[897]: no configs at "/usr/lib/ignition/base.d"
May 8 01:24:47.242087 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 01:24:47.243335 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 01:24:47.242585 systemd-networkd[927]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 01:24:47.243382 ignition[897]: parsed url from cmdline: ""
May 8 01:24:47.245741 unknown[897]: fetched base config from "system"
May 8 01:24:47.243384 ignition[897]: no config URL provided
May 8 01:24:47.245745 unknown[897]: fetched user config from "system"
May 8 01:24:47.243386 ignition[897]: reading system config file "/usr/lib/ignition/user.ign"
May 8 01:24:47.253938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 01:24:47.243408 ignition[897]: parsing config with SHA512: a320d282668dc05c601255518963dba7e2a6aa71af0e5124a467da9f4ac44e37bf02b69b0e967f74891ed370adb44a9bd00027cc254255f4b6e78e94211234e7
May 8 01:24:47.270933 systemd-networkd[927]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 01:24:47.246360 ignition[897]: fetch-offline: fetch-offline passed
May 8 01:24:47.273175 systemd[1]: Reached target network.target - Network.
May 8 01:24:47.246364 ignition[897]: POST message to Packet Timeline
May 8 01:24:47.287702 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 01:24:47.246371 ignition[897]: POST Status error: resource requires networking
May 8 01:24:47.296726 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 01:24:47.474636 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
May 8 01:24:47.246654 ignition[897]: Ignition finished successfully
May 8 01:24:47.298959 systemd-networkd[927]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 01:24:47.310190 ignition[942]: Ignition 2.20.0
May 8 01:24:47.471174 systemd-networkd[927]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 01:24:47.310195 ignition[942]: Stage: kargs
May 8 01:24:47.310296 ignition[942]: no configs at "/usr/lib/ignition/base.d"
May 8 01:24:47.310302 ignition[942]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 01:24:47.310833 ignition[942]: kargs: kargs passed
May 8 01:24:47.310835 ignition[942]: POST message to Packet Timeline
May 8 01:24:47.310847 ignition[942]: GET https://metadata.packet.net/metadata: attempt #1
May 8 01:24:47.311279 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41889->[::1]:53: read: connection refused
May 8 01:24:47.512137 ignition[942]: GET https://metadata.packet.net/metadata: attempt #2
May 8 01:24:47.513403 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34555->[::1]:53: read: connection refused
May 8 01:24:47.675536 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
May 8 01:24:47.677018 systemd-networkd[927]: eno1: Link UP
May 8 01:24:47.677171 systemd-networkd[927]: eno2: Link UP
May 8 01:24:47.677309 systemd-networkd[927]: enp1s0f0np0: Link UP
May 8 01:24:47.677474 systemd-networkd[927]: enp1s0f0np0: Gained carrier
May 8 01:24:47.685770 systemd-networkd[927]: enp1s0f1np1: Link UP
May 8 01:24:47.724766 systemd-networkd[927]: enp1s0f0np0: DHCPv4 address 145.40.90.133/31, gateway 145.40.90.132 acquired from 145.40.83.140
May 8 01:24:47.914778 ignition[942]: GET https://metadata.packet.net/metadata: attempt #3
May 8 01:24:47.916380 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57311->[::1]:53: read: connection refused
May 8 01:24:48.484294 systemd-networkd[927]: enp1s0f1np1: Gained carrier
May 8 01:24:48.716958 ignition[942]: GET https://metadata.packet.net/metadata: attempt #4
May 8 01:24:48.718158 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33238->[::1]:53: read: connection refused
May 8 01:24:48.932114 systemd-networkd[927]: enp1s0f0np0: Gained IPv6LL
May 8 01:24:50.148079 systemd-networkd[927]: enp1s0f1np1: Gained IPv6LL
May 8 01:24:50.319791 ignition[942]: GET https://metadata.packet.net/metadata: attempt #5
May 8 01:24:50.320915 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35531->[::1]:53: read: connection refused
May 8 01:24:53.524530 ignition[942]: GET https://metadata.packet.net/metadata: attempt #6
May 8 01:24:54.908758 ignition[942]: GET result: OK
May 8 01:24:55.424263 ignition[942]: Ignition finished successfully
May 8 01:24:55.429517 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 01:24:55.456758 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 01:24:55.462997 ignition[962]: Ignition 2.20.0
May 8 01:24:55.463002 ignition[962]: Stage: disks
May 8 01:24:55.463111 ignition[962]: no configs at "/usr/lib/ignition/base.d"
May 8 01:24:55.463118 ignition[962]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 01:24:55.463680 ignition[962]: disks: disks passed
May 8 01:24:55.463683 ignition[962]: POST message to Packet Timeline
May 8 01:24:55.463696 ignition[962]: GET https://metadata.packet.net/metadata: attempt #1
May 8 01:24:56.489988 ignition[962]: GET result: OK
May 8 01:24:56.933358 ignition[962]: Ignition finished successfully
May 8 01:24:56.936890 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 01:24:56.951735 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 01:24:56.969782 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 01:24:56.990790 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 01:24:57.011823 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 01:24:57.032810 systemd[1]: Reached target basic.target - Basic System.
May 8 01:24:57.062773 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 01:24:57.096508 systemd-fsck[978]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 01:24:57.106924 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 01:24:57.135754 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 01:24:57.206539 kernel: EXT4-fs (sda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none.
May 8 01:24:57.206633 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 01:24:57.216004 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 01:24:57.257749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 01:24:57.305555 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (987)
May 8 01:24:57.305569 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 01:24:57.305578 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 01:24:57.305585 kernel: BTRFS info (device sda6): using free space tree
May 8 01:24:57.267289 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 01:24:57.335745 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 01:24:57.335757 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 01:24:57.339021 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 8 01:24:57.351390 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 8 01:24:57.361754 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 01:24:57.361784 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 01:24:57.420746 coreos-metadata[1004]: May 08 01:24:57.414 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 8 01:24:57.411687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 01:24:57.459715 coreos-metadata[1005]: May 08 01:24:57.414 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 8 01:24:57.429803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 01:24:57.461776 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 01:24:57.500673 initrd-setup-root[1019]: cut: /sysroot/etc/passwd: No such file or directory
May 8 01:24:57.510631 initrd-setup-root[1026]: cut: /sysroot/etc/group: No such file or directory
May 8 01:24:57.520614 initrd-setup-root[1033]: cut: /sysroot/etc/shadow: No such file or directory
May 8 01:24:57.530615 initrd-setup-root[1040]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 01:24:57.547527 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 01:24:57.568772 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 01:24:57.593772 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 01:24:57.569349 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 01:24:57.602469 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 01:24:57.623616 ignition[1107]: INFO : Ignition 2.20.0
May 8 01:24:57.623616 ignition[1107]: INFO : Stage: mount
May 8 01:24:57.637734 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 01:24:57.637734 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 01:24:57.637734 ignition[1107]: INFO : mount: mount passed
May 8 01:24:57.637734 ignition[1107]: INFO : POST message to Packet Timeline
May 8 01:24:57.637734 ignition[1107]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 8 01:24:57.632078 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 01:24:58.461405 coreos-metadata[1005]: May 08 01:24:58.461 INFO Fetch successful
May 8 01:24:58.541367 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 8 01:24:58.541426 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 8 01:24:58.576795 ignition[1107]: INFO : GET result: OK
May 8 01:24:58.804079 coreos-metadata[1004]: May 08 01:24:58.803 INFO Fetch successful
May 8 01:24:58.883065 coreos-metadata[1004]: May 08 01:24:58.883 INFO wrote hostname ci-4230.1.1-n-cd63e3b163 to /sysroot/etc/hostname
May 8 01:24:58.884387 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 8 01:24:59.288318 ignition[1107]: INFO : Ignition finished successfully
May 8 01:24:59.291083 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 01:24:59.325715 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 01:24:59.336348 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 01:24:59.384514 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1132)
May 8 01:24:59.401970 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 01:24:59.401985 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 01:24:59.407872 kernel: BTRFS info (device sda6): using free space tree
May 8 01:24:59.423257 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 01:24:59.423273 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 01:24:59.425111 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 01:24:59.457064 ignition[1149]: INFO : Ignition 2.20.0
May 8 01:24:59.457064 ignition[1149]: INFO : Stage: files
May 8 01:24:59.472707 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 01:24:59.472707 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 01:24:59.472707 ignition[1149]: DEBUG : files: compiled without relabeling support, skipping
May 8 01:24:59.472707 ignition[1149]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 01:24:59.472707 ignition[1149]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 01:24:59.472707 ignition[1149]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 01:24:59.472707 ignition[1149]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 01:24:59.472707 ignition[1149]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 01:24:59.472707 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 01:24:59.472707 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 01:24:59.460722 unknown[1149]: wrote ssh authorized keys file for user: core
May 8 01:24:59.612716 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 01:24:59.754970 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 01:24:59.754970 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 01:24:59.789713 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 8 01:25:00.346253 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 01:25:00.404622 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 01:25:00.404622 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 01:25:00.437733 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 01:25:00.787779 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 01:25:01.015561 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 01:25:01.015561 ignition[1149]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 01:25:01.044839 ignition[1149]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 01:25:01.044839 ignition[1149]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 01:25:01.044839 ignition[1149]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 01:25:01.044839 ignition[1149]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 8 01:25:01.044839 ignition[1149]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 8 01:25:01.044839 ignition[1149]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 01:25:01.044839 ignition[1149]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 01:25:01.044839 ignition[1149]: INFO : files: files passed
May 8 01:25:01.044839 ignition[1149]: INFO : POST message to Packet Timeline
May 8 01:25:01.044839 ignition[1149]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 8 01:25:01.976173 ignition[1149]: INFO : GET result: OK
May 8 01:25:02.346560 ignition[1149]: INFO : Ignition finished successfully
May 8 01:25:02.350215 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 01:25:02.376849 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 01:25:02.377277 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 01:25:02.394976 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 01:25:02.395042 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 01:25:02.429204 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 01:25:02.448019 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 01:25:02.491989 initrd-setup-root-after-ignition[1186]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 01:25:02.491989 initrd-setup-root-after-ignition[1186]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 01:25:02.478875 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 01:25:02.541840 initrd-setup-root-after-ignition[1190]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 01:25:02.562330 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 01:25:02.562381 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 01:25:02.581900 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 01:25:02.610676 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 01:25:02.621740 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 01:25:02.637827 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 01:25:02.681684 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 01:25:02.704702 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 01:25:02.731449 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 01:25:02.743748 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 01:25:02.764837 systemd[1]: Stopped target timers.target - Timer Units.
May 8 01:25:02.782907 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 01:25:02.783112 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 01:25:02.811218 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 01:25:02.833119 systemd[1]: Stopped target basic.target - Basic System.
May 8 01:25:02.851129 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 01:25:02.870215 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 01:25:02.891115 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 01:25:02.912127 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 01:25:02.932118 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 01:25:02.953159 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 01:25:02.975142 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 01:25:02.995115 systemd[1]: Stopped target swap.target - Swaps.
May 8 01:25:03.013013 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 01:25:03.013433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 01:25:03.048972 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 01:25:03.059142 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 01:25:03.079996 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 01:25:03.080454 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 01:25:03.101996 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 01:25:03.102402 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 01:25:03.134121 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 01:25:03.134603 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 01:25:03.154325 systemd[1]: Stopped target paths.target - Path Units.
May 8 01:25:03.172989 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 01:25:03.177837 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 01:25:03.194122 systemd[1]: Stopped target slices.target - Slice Units.
May 8 01:25:03.214109 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 01:25:03.233098 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 01:25:03.233398 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 01:25:03.254256 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 01:25:03.254562 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 01:25:03.277232 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 01:25:03.277671 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 01:25:03.296208 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 01:25:03.417692 ignition[1211]: INFO : Ignition 2.20.0
May 8 01:25:03.417692 ignition[1211]: INFO : Stage: umount
May 8 01:25:03.417692 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 01:25:03.417692 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 01:25:03.417692 ignition[1211]: INFO : umount: umount passed
May 8 01:25:03.417692 ignition[1211]: INFO : POST message to Packet Timeline
May 8 01:25:03.417692 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 8 01:25:03.296614 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 01:25:03.314204 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 8 01:25:03.314628 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 8 01:25:03.342621 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 01:25:03.375742 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 01:25:03.384568 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 01:25:03.384672 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 01:25:03.408834 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 01:25:03.408942 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 01:25:03.447478 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 01:25:03.448224 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 01:25:03.448276 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 01:25:03.473358 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 01:25:03.473426 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 01:25:04.620720 ignition[1211]: INFO : GET result: OK
May 8 01:25:04.992939 ignition[1211]: INFO : Ignition finished successfully
May 8 01:25:04.995101 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 01:25:04.995275 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 01:25:05.012217 systemd[1]: Stopped target network.target - Network.
May 8 01:25:05.027721 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 01:25:05.027885 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 01:25:05.045875 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 01:25:05.046048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 01:25:05.063918 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 01:25:05.064086 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 01:25:05.081925 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 01:25:05.082095 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 01:25:05.099902 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 01:25:05.100084 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 01:25:05.118238 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 01:25:05.136022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 01:25:05.154531 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 01:25:05.154806 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 01:25:05.176802 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 8 01:25:05.176901 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 01:25:05.176945 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 01:25:05.183522 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 8 01:25:05.183903 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 01:25:05.183934 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 01:25:05.228881 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 01:25:05.239710 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 01:25:05.239861 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 01:25:05.260931 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 01:25:05.261100 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 01:25:05.281248 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 01:25:05.281408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 01:25:05.298889 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 01:25:05.299067 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 01:25:05.321288 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 01:25:05.347008 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 01:25:05.347207 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 01:25:05.348239 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 01:25:05.348611 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 01:25:05.373028 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 01:25:05.373065 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 01:25:05.400612 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 01:25:05.400641 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 01:25:05.420709 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 01:25:05.420794 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 01:25:05.461724 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 01:25:05.461896 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 01:25:05.499693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 01:25:05.499855 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 01:25:05.557748 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 01:25:05.585682 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 01:25:05.585843 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 01:25:05.605140 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 01:25:05.837733 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
May 8 01:25:05.605282 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 01:25:05.626797 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 01:25:05.626948 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 01:25:05.647897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 01:25:05.648070 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 01:25:05.671492 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 8 01:25:05.671695 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 01:25:05.672806 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 01:25:05.673058 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 01:25:05.689648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 01:25:05.689901 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 01:25:05.711847 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 01:25:05.739941 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 01:25:05.790412 systemd[1]: Switching root.
May 8 01:25:05.954649 systemd-journald[268]: Journal stopped
May 8 01:25:07.667466 kernel: SELinux: policy capability network_peer_controls=1
May 8 01:25:07.667482 kernel: SELinux: policy capability open_perms=1
May 8 01:25:07.667489 kernel: SELinux: policy capability extended_socket_class=1
May 8 01:25:07.667498 kernel: SELinux: policy capability always_check_network=0
May 8 01:25:07.667505 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 01:25:07.667511 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 01:25:07.667517 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 01:25:07.667523 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 01:25:07.667529 kernel: audit: type=1403 audit(1746667506.047:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 01:25:07.667536 systemd[1]: Successfully loaded SELinux policy in 73.389ms.
May 8 01:25:07.667544 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.248ms.
May 8 01:25:07.667551 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 01:25:07.667558 systemd[1]: Detected architecture x86-64.
May 8 01:25:07.667564 systemd[1]: Detected first boot.
May 8 01:25:07.667571 systemd[1]: Hostname set to .
May 8 01:25:07.667579 systemd[1]: Initializing machine ID from random generator.
May 8 01:25:07.667585 zram_generator::config[1263]: No configuration found.
May 8 01:25:07.667592 systemd[1]: Populated /etc with preset unit settings.
May 8 01:25:07.667600 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 8 01:25:07.667606 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 01:25:07.667613 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 01:25:07.667619 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 01:25:07.667627 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 01:25:07.667633 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 01:25:07.667641 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 01:25:07.667647 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 01:25:07.667654 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 01:25:07.667661 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 01:25:07.667668 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 01:25:07.667676 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 01:25:07.667682 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 01:25:07.667689 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 01:25:07.667696 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 01:25:07.667702 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 01:25:07.667709 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 01:25:07.667716 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 01:25:07.667723 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
May 8 01:25:07.667731 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 01:25:07.667739 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 01:25:07.667746 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 01:25:07.667754 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 01:25:07.667761 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 01:25:07.667768 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 01:25:07.667775 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 01:25:07.667782 systemd[1]: Reached target slices.target - Slice Units.
May 8 01:25:07.667789 systemd[1]: Reached target swap.target - Swaps.
May 8 01:25:07.667796 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 01:25:07.667803 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 01:25:07.667810 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 8 01:25:07.667817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 01:25:07.667825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 01:25:07.667832 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 01:25:07.667839 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 01:25:07.667846 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 01:25:07.667853 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 01:25:07.667860 systemd[1]: Mounting media.mount - External Media Directory...
May 8 01:25:07.667867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 01:25:07.667874 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 01:25:07.667882 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 01:25:07.667889 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 01:25:07.667897 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 01:25:07.667904 systemd[1]: Reached target machines.target - Containers.
May 8 01:25:07.667911 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 01:25:07.667918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 01:25:07.667925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 01:25:07.667932 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 01:25:07.667940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 01:25:07.667947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 01:25:07.667954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 01:25:07.667961 kernel: ACPI: bus type drm_connector registered
May 8 01:25:07.667967 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 01:25:07.667974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 01:25:07.667981 kernel: loop: module loaded
May 8 01:25:07.667987 kernel: fuse: init (API version 7.39)
May 8 01:25:07.667994 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 01:25:07.668002 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 01:25:07.668009 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 01:25:07.668016 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 01:25:07.668023 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 01:25:07.668030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 01:25:07.668038 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 01:25:07.668054 systemd-journald[1367]: Collecting audit messages is disabled.
May 8 01:25:07.668071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 01:25:07.668079 systemd-journald[1367]: Journal started
May 8 01:25:07.668094 systemd-journald[1367]: Runtime Journal (/run/log/journal/2b0d1a704a3a44668ef712f1bab335eb) is 8M, max 639.9M, 631.9M free.
May 8 01:25:06.487887 systemd[1]: Queued start job for default target multi-user.target.
May 8 01:25:06.503352 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 8 01:25:06.503638 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 01:25:07.696535 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 01:25:07.707536 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 01:25:07.731551 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 8 01:25:07.761527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 01:25:07.782666 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 01:25:07.782694 systemd[1]: Stopped verity-setup.service.
May 8 01:25:07.807536 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 01:25:07.815528 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 01:25:07.824949 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 01:25:07.834785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 01:25:07.844767 systemd[1]: Mounted media.mount - External Media Directory.
May 8 01:25:07.854751 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 01:25:07.864749 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 01:25:07.874735 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 01:25:07.884844 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 01:25:07.895874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 01:25:07.906894 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 01:25:07.907061 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 01:25:07.917995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 01:25:07.918215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 01:25:07.931367 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 01:25:07.931830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 01:25:07.943428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 01:25:07.944035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 01:25:07.957420 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 01:25:07.958025 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 01:25:07.968428 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 01:25:07.969035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 01:25:07.979572 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 01:25:07.991532 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 01:25:08.004481 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 01:25:08.017489 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 8 01:25:08.030477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 01:25:08.066094 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 01:25:08.096822 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 01:25:08.109571 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 01:25:08.120713 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 01:25:08.120732 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 01:25:08.131341 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 8 01:25:08.154917 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 01:25:08.167145 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 01:25:08.176898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 01:25:08.178993 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 01:25:08.189082 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 01:25:08.199644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 01:25:08.200278 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 01:25:08.205485 systemd-journald[1367]: Time spent on flushing to /var/log/journal/2b0d1a704a3a44668ef712f1bab335eb is 13.114ms for 1373 entries.
May 8 01:25:08.205485 systemd-journald[1367]: System Journal (/var/log/journal/2b0d1a704a3a44668ef712f1bab335eb) is 8M, max 195.6M, 187.6M free.
May 8 01:25:08.228529 systemd-journald[1367]: Received client request to flush runtime journal.
May 8 01:25:08.217616 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 01:25:08.218259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 01:25:08.228337 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 01:25:08.246980 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 01:25:08.257551 kernel: loop0: detected capacity change from 0 to 138176
May 8 01:25:08.264375 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 01:25:08.277048 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 01:25:08.277892 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
May 8 01:25:08.277907 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
May 8 01:25:08.287500 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 01:25:08.294709 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 01:25:08.305765 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 01:25:08.316747 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 01:25:08.327820 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 01:25:08.340551 kernel: loop1: detected capacity change from 0 to 8
May 8 01:25:08.343737 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 01:25:08.353797 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 01:25:08.368705 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 01:25:08.391547 kernel: loop2: detected capacity change from 0 to 147912
May 8 01:25:08.392673 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 8 01:25:08.404219 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 01:25:08.414166 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 01:25:08.414707 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 8 01:25:08.426828 udevadm[1411]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 01:25:08.435987 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 01:25:08.453653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 01:25:08.461553 kernel: loop3: detected capacity change from 0 to 210664
May 8 01:25:08.466803 systemd-tmpfiles[1430]: ACLs are not supported, ignoring.
May 8 01:25:08.466813 systemd-tmpfiles[1430]: ACLs are not supported, ignoring.
May 8 01:25:08.472566 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 01:25:08.523506 kernel: loop4: detected capacity change from 0 to 138176
May 8 01:25:08.530186 ldconfig[1398]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 01:25:08.531756 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 01:25:08.544501 kernel: loop5: detected capacity change from 0 to 8
May 8 01:25:08.551501 kernel: loop6: detected capacity change from 0 to 147912
May 8 01:25:08.570507 kernel: loop7: detected capacity change from 0 to 210664
May 8 01:25:08.581111 (sd-merge)[1435]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
May 8 01:25:08.581377 (sd-merge)[1435]: Merged extensions into '/usr'.
May 8 01:25:08.583900 systemd[1]: Reload requested from client PID 1403 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 01:25:08.583908 systemd[1]: Reloading...
May 8 01:25:08.608506 zram_generator::config[1462]: No configuration found.
May 8 01:25:08.680109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 01:25:08.732282 systemd[1]: Reloading finished in 148 ms.
May 8 01:25:08.751526 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 01:25:08.762874 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 01:25:08.786650 systemd[1]: Starting ensure-sysext.service...
May 8 01:25:08.794508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 01:25:08.807559 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 01:25:08.817570 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 01:25:08.817747 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 01:25:08.818201 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 01:25:08.818355 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
May 8 01:25:08.818390 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
May 8 01:25:08.820466 systemd-tmpfiles[1520]: Detected autofs mount point /boot during canonicalization of boot.
May 8 01:25:08.820470 systemd-tmpfiles[1520]: Skipping /boot
May 8 01:25:08.823261 systemd[1]: Reload requested from client PID 1519 ('systemctl') (unit ensure-sysext.service)...
May 8 01:25:08.823269 systemd[1]: Reloading...
May 8 01:25:08.825774 systemd-tmpfiles[1520]: Detected autofs mount point /boot during canonicalization of boot.
May 8 01:25:08.825779 systemd-tmpfiles[1520]: Skipping /boot
May 8 01:25:08.834442 systemd-udevd[1521]: Using default interface naming scheme 'v255'.
May 8 01:25:08.848555 zram_generator::config[1550]: No configuration found.
May 8 01:25:08.885523 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1594)
May 8 01:25:08.885653 kernel: mousedev: PS/2 mouse device common for all mice
May 8 01:25:08.900512 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
May 8 01:25:08.907550 kernel: IPMI message handler: version 39.2
May 8 01:25:08.907592 kernel: ACPI: button: Sleep Button [SLPB]
May 8 01:25:08.921501 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 8 01:25:08.921565 kernel: ipmi device interface
May 8 01:25:08.925575 kernel: ACPI: button: Power Button [PWRF]
May 8 01:25:08.942104 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
May 8 01:25:08.958768 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
May 8 01:25:08.958971 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
May 8 01:25:08.959104 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
May 8 01:25:08.959238 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
May 8 01:25:08.956586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 01:25:08.969524 kernel: ipmi_si: IPMI System Interface driver
May 8 01:25:08.985479 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
May 8 01:25:08.991958 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
May 8 01:25:08.991971 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
May 8 01:25:08.991982 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
May 8 01:25:09.021631 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
May 8 01:25:09.021759 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
May 8 01:25:09.021837 kernel: ipmi_si: Adding ACPI-specified kcs state machine
May 8 01:25:09.021848 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
May 8 01:25:09.039500 kernel: iTCO_vendor_support: vendor-support=0
May 8 01:25:09.046311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM.
May 8 01:25:09.057733 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
May 8 01:25:09.057887 systemd[1]: Reloading finished in 234 ms.
May 8 01:25:09.094630 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
May 8 01:25:09.105608 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
May 8 01:25:09.105761 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
May 8 01:25:09.115870 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 01:25:09.136559 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
May 8 01:25:09.142502 kernel: intel_rapl_common: Found RAPL domain package
May 8 01:25:09.142522 kernel: intel_rapl_common: Found RAPL domain core
May 8 01:25:09.142531 kernel: intel_rapl_common: Found RAPL domain dram
May 8 01:25:09.155544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 01:25:09.177320 systemd[1]: Finished ensure-sysext.service.
May 8 01:25:09.199500 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
May 8 01:25:09.203725 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
May 8 01:25:09.206531 kernel: ipmi_ssif: IPMI SSIF Interface driver
May 8 01:25:09.215621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 01:25:09.228633 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 01:25:09.238396 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 01:25:09.246566 augenrules[1723]: No rules
May 8 01:25:09.250704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 01:25:09.251344 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 01:25:09.261116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 01:25:09.271143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 01:25:09.282117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 01:25:09.291661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 01:25:09.302867 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 01:25:09.313540 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 01:25:09.314149 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 01:25:09.325529 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 01:25:09.326511 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 01:25:09.327419 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 01:25:09.343250 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 01:25:09.372835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 01:25:09.382540 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 01:25:09.383143 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 01:25:09.394709 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 01:25:09.394819 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 01:25:09.395005 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 01:25:09.395145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 01:25:09.395228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 01:25:09.395366 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 01:25:09.395447 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 01:25:09.395588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 01:25:09.395668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 01:25:09.395803 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 01:25:09.395884 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 01:25:09.396031 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 01:25:09.396188 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 01:25:09.400971 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 01:25:09.401003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 01:25:09.401035 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 01:25:09.401588 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 01:25:09.402349 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 01:25:09.402374 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 01:25:09.402587 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 01:25:09.408095 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 01:25:09.409901 lvm[1751]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 01:25:09.425941 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 01:25:09.456584 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 01:25:09.471678 systemd-resolved[1736]: Positive Trust Anchors:
May 8 01:25:09.471686 systemd-resolved[1736]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 01:25:09.471711 systemd-resolved[1736]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 01:25:09.474323 systemd-resolved[1736]: Using system hostname 'ci-4230.1.1-n-cd63e3b163'.
May 8 01:25:09.477900 systemd-networkd[1735]: lo: Link UP
May 8 01:25:09.477903 systemd-networkd[1735]: lo: Gained carrier
May 8 01:25:09.480445 systemd-networkd[1735]: bond0: netdev ready
May 8 01:25:09.481484 systemd-networkd[1735]: Enumeration completed
May 8 01:25:09.486873 systemd-networkd[1735]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:94.network.
May 8 01:25:09.525698 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 01:25:09.537779 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 01:25:09.548562 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 01:25:09.558688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 01:25:09.570472 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 01:25:09.579532 systemd[1]: Reached target network.target - Network.
May 8 01:25:09.587527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 01:25:09.598531 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 01:25:09.607578 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 01:25:09.618546 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 01:25:09.630534 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 01:25:09.641559 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 01:25:09.641572 systemd[1]: Reached target paths.target - Path Units.
May 8 01:25:09.649526 systemd[1]: Reached target time-set.target - System Time Set.
May 8 01:25:09.658609 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 01:25:09.669570 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 01:25:09.680527 systemd[1]: Reached target timers.target - Timer Units.
May 8 01:25:09.690421 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 01:25:09.701451 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 01:25:09.710834 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 8 01:25:09.737207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 01:25:09.747404 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 8 01:25:09.768535 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
May 8 01:25:09.782561 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
May 8 01:25:09.783699 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 01:25:09.785731 lvm[1774]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 01:25:09.786036 systemd-networkd[1735]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:95.network.
May 8 01:25:09.795302 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 8 01:25:09.807218 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 01:25:09.819009 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 01:25:09.828944 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 01:25:09.840057 systemd[1]: Reached target sockets.target - Socket Units.
May 8 01:25:09.849593 systemd[1]: Reached target basic.target - Basic System.
May 8 01:25:09.858734 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 01:25:09.858754 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 01:25:09.859491 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 01:25:09.870665 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 8 01:25:09.882531 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 01:25:09.891203 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 01:25:09.900137 coreos-metadata[1779]: May 08 01:25:09.900 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 8 01:25:09.901014 coreos-metadata[1779]: May 08 01:25:09.900 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 8 01:25:09.902261 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 01:25:09.902675 dbus-daemon[1780]: [system] SELinux support is enabled
May 8 01:25:09.904078 jq[1783]: false
May 8 01:25:09.912776 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 01:25:09.913445 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 01:25:09.921708 extend-filesystems[1785]: Found loop4
May 8 01:25:09.921708 extend-filesystems[1785]: Found loop5
May 8 01:25:09.967658 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
May 8 01:25:09.967796 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
May 8 01:25:09.967807 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
May 8 01:25:09.967816 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1669)
May 8 01:25:09.967825 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
May 8 01:25:09.927530 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 01:25:09.967885 extend-filesystems[1785]: Found loop6
May 8 01:25:09.967885 extend-filesystems[1785]: Found loop7
May 8 01:25:09.967885 extend-filesystems[1785]: Found sda
May 8 01:25:09.967885 extend-filesystems[1785]: Found sda1
May 8 01:25:09.967885 extend-filesystems[1785]: Found sda2
May 8 01:25:09.967885 extend-filesystems[1785]: Found sda3
May 8 01:25:09.967885 extend-filesystems[1785]: Found usr
May 8 01:25:09.967885 extend-filesystems[1785]: Found sda4
May 8 01:25:09.967885 extend-filesystems[1785]: Found sda6
May 8 01:25:09.967885 extend-filesystems[1785]: Found sda7
May 8 01:25:09.967885 extend-filesystems[1785]: Found sda9
May 8 01:25:09.967885 extend-filesystems[1785]: Checking size of /dev/sda9
May 8 01:25:09.967885 extend-filesystems[1785]: Resized partition /dev/sda9
May 8 01:25:10.109629 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
May 8 01:25:10.109655 kernel: bond0: active interface up!
May 8 01:25:09.945336 systemd-networkd[1735]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
May 8 01:25:10.109749 extend-filesystems[1793]: resize2fs 1.47.1 (20-May-2024)
May 8 01:25:09.946644 systemd-networkd[1735]: enp1s0f0np0: Link UP
May 8 01:25:09.946809 systemd-networkd[1735]: enp1s0f0np0: Gained carrier
May 8 01:25:09.975766 systemd-networkd[1735]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:fc:94.network.
May 8 01:25:09.975944 systemd-networkd[1735]: enp1s0f1np1: Link UP
May 8 01:25:09.976083 systemd-networkd[1735]: enp1s0f1np1: Gained carrier
May 8 01:25:10.126938 update_engine[1810]: I20250508 01:25:10.117547 1810 main.cc:92] Flatcar Update Engine starting
May 8 01:25:10.126938 update_engine[1810]: I20250508 01:25:10.118279 1810 update_check_scheduler.cc:74] Next update check in 8m50s
May 8 01:25:09.982324 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 01:25:10.002700 systemd-networkd[1735]: bond0: Link UP
May 8 01:25:10.002863 systemd-networkd[1735]: bond0: Gained carrier
May 8 01:25:10.003033 systemd-timesyncd[1737]: Network configuration changed, trying to establish connection.
May 8 01:25:10.003462 systemd-timesyncd[1737]: Network configuration changed, trying to establish connection.
May 8 01:25:10.003672 systemd-timesyncd[1737]: Network configuration changed, trying to establish connection.
May 8 01:25:10.003781 systemd-timesyncd[1737]: Network configuration changed, trying to establish connection.
May 8 01:25:10.017216 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 01:25:10.043660 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 01:25:10.072761 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
May 8 01:25:10.093888 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 01:25:10.094280 systemd[1]: Starting update-engine.service - Update Engine...
May 8 01:25:10.097118 systemd-logind[1805]: Watching system buttons on /dev/input/event3 (Power Button)
May 8 01:25:10.097131 systemd-logind[1805]: Watching system buttons on /dev/input/event2 (Sleep Button)
May 8 01:25:10.097141 systemd-logind[1805]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
May 8 01:25:10.097309 systemd-logind[1805]: New seat seat0.
May 8 01:25:10.102225 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 01:25:10.126934 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 01:25:10.137381 jq[1811]: true
May 8 01:25:10.147061 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 01:25:10.157815 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 8 01:25:10.175500 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
May 8 01:25:10.185740 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 01:25:10.185848 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 01:25:10.186030 systemd[1]: motdgen.service: Deactivated successfully.
May 8 01:25:10.186130 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 01:25:10.196072 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 01:25:10.196175 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 01:25:10.209272 sshd_keygen[1808]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 01:25:10.225832 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 01:25:10.227643 jq[1814]: true
May 8 01:25:10.238297 (ntainerd)[1823]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 01:25:10.243020 tar[1813]: linux-amd64/helm
May 8 01:25:10.243378 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 8 01:25:10.247530 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
May 8 01:25:10.247642 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
May 8 01:25:10.257741 systemd[1]: Started update-engine.service - Update Engine.
May 8 01:25:10.279700 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 01:25:10.287570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 01:25:10.287675 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 01:25:10.291498 bash[1851]: Updated "/home/core/.ssh/authorized_keys"
May 8 01:25:10.298637 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 01:25:10.298717 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 01:25:10.326676 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 01:25:10.340149 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 01:25:10.350830 systemd[1]: issuegen.service: Deactivated successfully.
May 8 01:25:10.350940 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 01:25:10.352782 locksmithd[1859]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 01:25:10.378696 systemd[1]: Starting sshkeys.service...
May 8 01:25:10.386338 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 01:25:10.399254 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 8 01:25:10.411421 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 8 01:25:10.412266 containerd[1823]: time="2025-05-08T01:25:10.412217417Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 8 01:25:10.422950 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 01:25:10.424722 containerd[1823]: time="2025-05-08T01:25:10.424701997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 01:25:10.425442 containerd[1823]: time="2025-05-08T01:25:10.425426256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 01:25:10.425475 containerd[1823]: time="2025-05-08T01:25:10.425441629Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 01:25:10.425475 containerd[1823]: time="2025-05-08T01:25:10.425450717Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 01:25:10.425560 containerd[1823]: time="2025-05-08T01:25:10.425550726Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 01:25:10.425588 containerd[1823]: time="2025-05-08T01:25:10.425562692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 01:25:10.425617 containerd[1823]: time="2025-05-08T01:25:10.425597152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 01:25:10.425617 containerd[1823]: time="2025-05-08T01:25:10.425604431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 01:25:10.425723 containerd[1823]: time="2025-05-08T01:25:10.425712895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 01:25:10.425723 containerd[1823]: time="2025-05-08T01:25:10.425721490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 01:25:10.425771 containerd[1823]: time="2025-05-08T01:25:10.425728907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 01:25:10.425771 containerd[1823]: time="2025-05-08T01:25:10.425733971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 01:25:10.425818 containerd[1823]: time="2025-05-08T01:25:10.425776320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 01:25:10.425900 containerd[1823]: time="2025-05-08T01:25:10.425892161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 01:25:10.425968 containerd[1823]: time="2025-05-08T01:25:10.425960274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 01:25:10.426000 containerd[1823]: time="2025-05-08T01:25:10.425968162Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 01:25:10.426026 containerd[1823]: time="2025-05-08T01:25:10.426009566Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 01:25:10.426051 containerd[1823]: time="2025-05-08T01:25:10.426035642Z" level=info msg="metadata content store policy set" policy=shared
May 8 01:25:10.435050 coreos-metadata[1880]: May 08 01:25:10.435 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 8 01:25:10.436510 containerd[1823]: time="2025-05-08T01:25:10.436483738Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 01:25:10.436549 containerd[1823]: time="2025-05-08T01:25:10.436517325Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 01:25:10.436549 containerd[1823]: time="2025-05-08T01:25:10.436528357Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 01:25:10.436549 containerd[1823]: time="2025-05-08T01:25:10.436538465Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 01:25:10.436549 containerd[1823]: time="2025-05-08T01:25:10.436546320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 01:25:10.436646 containerd[1823]: time="2025-05-08T01:25:10.436636846Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 01:25:10.439728 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 01:25:10.441721 containerd[1823]: time="2025-05-08T01:25:10.441707418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 01:25:10.441792 containerd[1823]: time="2025-05-08T01:25:10.441782599Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 01:25:10.441821 containerd[1823]: time="2025-05-08T01:25:10.441793004Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 01:25:10.441821 containerd[1823]: time="2025-05-08T01:25:10.441802804Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 01:25:10.441821 containerd[1823]: time="2025-05-08T01:25:10.441811058Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 01:25:10.441821 containerd[1823]: time="2025-05-08T01:25:10.441818267Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441825879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441833284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441840996Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441848383Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441855188Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441862048Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441873618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441880789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441887666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 01:25:10.441905 containerd[1823]: time="2025-05-08T01:25:10.441900337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441910110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441918617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441925222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441932185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441939239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441946854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441955517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441962413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441968821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441977402Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441988564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.441997016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442143 containerd[1823]: time="2025-05-08T01:25:10.442003010Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 01:25:10.442434 containerd[1823]: time="2025-05-08T01:25:10.442333135Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 01:25:10.442434 containerd[1823]: time="2025-05-08T01:25:10.442347931Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 01:25:10.442434 containerd[1823]: time="2025-05-08T01:25:10.442354269Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 01:25:10.442434 containerd[1823]: time="2025-05-08T01:25:10.442361255Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 01:25:10.442434 containerd[1823]: time="2025-05-08T01:25:10.442366509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 01:25:10.442434 containerd[1823]: time="2025-05-08T01:25:10.442373623Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 01:25:10.442434 containerd[1823]: time="2025-05-08T01:25:10.442379661Z" level=info msg="NRI interface is disabled by configuration."
May 8 01:25:10.442434 containerd[1823]: time="2025-05-08T01:25:10.442389110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 01:25:10.442629 containerd[1823]: time="2025-05-08T01:25:10.442570405Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 01:25:10.442629 containerd[1823]: time="2025-05-08T01:25:10.442600807Z" level=info msg="Connect containerd service" May 8 01:25:10.442629 containerd[1823]: time="2025-05-08T01:25:10.442621903Z" level=info msg="using legacy CRI server" May 8 01:25:10.442629 containerd[1823]: time="2025-05-08T01:25:10.442626290Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 01:25:10.442962 containerd[1823]: time="2025-05-08T01:25:10.442940318Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 01:25:10.443302 containerd[1823]: time="2025-05-08T01:25:10.443290168Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 01:25:10.443425 containerd[1823]: time="2025-05-08T01:25:10.443404583Z" level=info msg="Start subscribing containerd event" May 8 01:25:10.443453 containerd[1823]: time="2025-05-08T01:25:10.443435471Z" level=info msg="Start recovering state" May 8 01:25:10.443453 containerd[1823]: time="2025-05-08T01:25:10.443437086Z" 
level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 01:25:10.443508 containerd[1823]: time="2025-05-08T01:25:10.443464702Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 01:25:10.443508 containerd[1823]: time="2025-05-08T01:25:10.443470573Z" level=info msg="Start event monitor" May 8 01:25:10.443508 containerd[1823]: time="2025-05-08T01:25:10.443485237Z" level=info msg="Start snapshots syncer" May 8 01:25:10.443508 containerd[1823]: time="2025-05-08T01:25:10.443492870Z" level=info msg="Start cni network conf syncer for default" May 8 01:25:10.443508 containerd[1823]: time="2025-05-08T01:25:10.443504234Z" level=info msg="Start streaming server" May 8 01:25:10.443639 containerd[1823]: time="2025-05-08T01:25:10.443546602Z" level=info msg="containerd successfully booted in 0.031804s" May 8 01:25:10.450500 kernel: EXT4-fs (sda9): resized filesystem to 116605649 May 8 01:25:10.454498 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. May 8 01:25:10.463716 systemd[1]: Reached target getty.target - Login Prompts. May 8 01:25:10.470860 extend-filesystems[1793]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 8 01:25:10.470860 extend-filesystems[1793]: old_desc_blocks = 1, new_desc_blocks = 56 May 8 01:25:10.470860 extend-filesystems[1793]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. May 8 01:25:10.512548 extend-filesystems[1785]: Resized filesystem in /dev/sda9 May 8 01:25:10.512548 extend-filesystems[1785]: Found sdb May 8 01:25:10.471867 systemd[1]: Started containerd.service - containerd container runtime. May 8 01:25:10.537637 tar[1813]: linux-amd64/LICENSE May 8 01:25:10.537637 tar[1813]: linux-amd64/README.md May 8 01:25:10.501869 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 01:25:10.501981 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
May 8 01:25:10.540474 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 01:25:10.901139 coreos-metadata[1779]: May 08 01:25:10.901 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 8 01:25:11.139862 systemd-timesyncd[1737]: Network configuration changed, trying to establish connection. May 8 01:25:11.587592 systemd-networkd[1735]: bond0: Gained IPv6LL May 8 01:25:11.587965 systemd-timesyncd[1737]: Network configuration changed, trying to establish connection. May 8 01:25:11.588965 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 01:25:11.600264 systemd[1]: Reached target network-online.target - Network is Online. May 8 01:25:11.627782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 01:25:11.638218 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 01:25:11.656788 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 01:25:12.260103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 01:25:12.271107 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 01:25:12.685361 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 May 8 01:25:12.685505 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity May 8 01:25:12.723818 kubelet[1919]: E0508 01:25:12.723761 1919 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 01:25:12.724990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 01:25:12.725068 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 8 01:25:12.725228 systemd[1]: kubelet.service: Consumed 560ms CPU time, 249.7M memory peak. May 8 01:25:14.359975 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 01:25:14.376843 systemd[1]: Started sshd@0-145.40.90.133:22-147.75.109.163:43882.service - OpenSSH per-connection server daemon (147.75.109.163:43882). May 8 01:25:14.439361 coreos-metadata[1779]: May 08 01:25:14.439 INFO Fetch successful May 8 01:25:14.451126 sshd[1940]: Accepted publickey for core from 147.75.109.163 port 43882 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:14.452249 sshd-session[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:14.459953 systemd-logind[1805]: New session 1 of user core. May 8 01:25:14.460923 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 01:25:14.485993 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 01:25:14.499215 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 01:25:14.500098 coreos-metadata[1880]: May 08 01:25:14.500 INFO Fetch successful May 8 01:25:14.512576 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 01:25:14.523378 (systemd)[1946]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 01:25:14.524699 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 01:25:14.535340 systemd-logind[1805]: New session c1 of user core. May 8 01:25:14.536049 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 8 01:25:14.542909 unknown[1880]: wrote ssh authorized keys file for user: core May 8 01:25:14.558418 update-ssh-keys[1956]: Updated "/home/core/.ssh/authorized_keys" May 8 01:25:14.558996 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 01:25:14.570663 systemd[1]: Finished sshkeys.service. 
May 8 01:25:14.638600 systemd[1946]: Queued start job for default target default.target. May 8 01:25:14.648171 systemd[1946]: Created slice app.slice - User Application Slice. May 8 01:25:14.648205 systemd[1946]: Reached target paths.target - Paths. May 8 01:25:14.648227 systemd[1946]: Reached target timers.target - Timers. May 8 01:25:14.648910 systemd[1946]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 01:25:14.654654 systemd[1946]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 01:25:14.654683 systemd[1946]: Reached target sockets.target - Sockets. May 8 01:25:14.654706 systemd[1946]: Reached target basic.target - Basic System. May 8 01:25:14.654727 systemd[1946]: Reached target default.target - Main User Target. May 8 01:25:14.654742 systemd[1946]: Startup finished in 115ms. May 8 01:25:14.654775 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 01:25:14.666398 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 01:25:14.734094 systemd[1]: Started sshd@1-145.40.90.133:22-147.75.109.163:43886.service - OpenSSH per-connection server daemon (147.75.109.163:43886). May 8 01:25:14.770221 sshd[1965]: Accepted publickey for core from 147.75.109.163 port 43886 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:14.770863 sshd-session[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:14.773327 systemd-logind[1805]: New session 2 of user core. May 8 01:25:14.783729 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 01:25:14.845035 sshd[1968]: Connection closed by 147.75.109.163 port 43886 May 8 01:25:14.845199 sshd-session[1965]: pam_unix(sshd:session): session closed for user core May 8 01:25:14.857913 systemd[1]: sshd@1-145.40.90.133:22-147.75.109.163:43886.service: Deactivated successfully. May 8 01:25:14.858780 systemd[1]: session-2.scope: Deactivated successfully. 
May 8 01:25:14.859486 systemd-logind[1805]: Session 2 logged out. Waiting for processes to exit. May 8 01:25:14.860395 systemd[1]: Started sshd@2-145.40.90.133:22-147.75.109.163:43894.service - OpenSSH per-connection server daemon (147.75.109.163:43894). May 8 01:25:14.873551 systemd-logind[1805]: Removed session 2. May 8 01:25:14.900719 sshd[1973]: Accepted publickey for core from 147.75.109.163 port 43894 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:14.901390 sshd-session[1973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:14.904375 systemd-logind[1805]: New session 3 of user core. May 8 01:25:14.916722 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 01:25:14.927219 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 8 01:25:14.940152 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 01:25:14.950045 systemd[1]: Startup finished in 2.679s (kernel) + 23.208s (initrd) + 8.975s (userspace) = 34.863s. May 8 01:25:15.014205 sshd[1977]: Connection closed by 147.75.109.163 port 43894 May 8 01:25:15.014769 sshd-session[1973]: pam_unix(sshd:session): session closed for user core May 8 01:25:15.018655 systemd[1]: sshd@2-145.40.90.133:22-147.75.109.163:43894.service: Deactivated successfully. May 8 01:25:15.019261 login[1896]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 01:25:15.020588 login[1890]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 01:25:15.020999 systemd[1]: session-3.scope: Deactivated successfully. May 8 01:25:15.022549 systemd-logind[1805]: Session 3 logged out. Waiting for processes to exit. May 8 01:25:15.024137 systemd-logind[1805]: Removed session 3. May 8 01:25:15.027970 systemd-logind[1805]: New session 4 of user core. May 8 01:25:15.036683 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 8 01:25:15.038894 systemd-logind[1805]: New session 5 of user core. May 8 01:25:15.040193 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 01:25:16.639557 systemd-timesyncd[1737]: Network configuration changed, trying to establish connection. May 8 01:25:22.977629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 01:25:22.991763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 01:25:23.221473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 01:25:23.223468 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 01:25:23.256282 kubelet[2017]: E0508 01:25:23.256147 2017 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 01:25:23.258346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 01:25:23.258424 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 01:25:23.258708 systemd[1]: kubelet.service: Consumed 140ms CPU time, 109.6M memory peak. May 8 01:25:25.050824 systemd[1]: Started sshd@3-145.40.90.133:22-147.75.109.163:49652.service - OpenSSH per-connection server daemon (147.75.109.163:49652). May 8 01:25:25.076946 sshd[2038]: Accepted publickey for core from 147.75.109.163 port 49652 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:25.077737 sshd-session[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:25.080711 systemd-logind[1805]: New session 6 of user core. May 8 01:25:25.097930 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 8 01:25:25.151482 sshd[2040]: Connection closed by 147.75.109.163 port 49652 May 8 01:25:25.151688 sshd-session[2038]: pam_unix(sshd:session): session closed for user core May 8 01:25:25.167739 systemd[1]: sshd@3-145.40.90.133:22-147.75.109.163:49652.service: Deactivated successfully. May 8 01:25:25.168572 systemd[1]: session-6.scope: Deactivated successfully. May 8 01:25:25.169149 systemd-logind[1805]: Session 6 logged out. Waiting for processes to exit. May 8 01:25:25.170130 systemd[1]: Started sshd@4-145.40.90.133:22-147.75.109.163:49654.service - OpenSSH per-connection server daemon (147.75.109.163:49654). May 8 01:25:25.170702 systemd-logind[1805]: Removed session 6. May 8 01:25:25.202032 sshd[2045]: Accepted publickey for core from 147.75.109.163 port 49654 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:25.202858 sshd-session[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:25.206328 systemd-logind[1805]: New session 7 of user core. May 8 01:25:25.217773 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 01:25:25.269395 sshd[2048]: Connection closed by 147.75.109.163 port 49654 May 8 01:25:25.269564 sshd-session[2045]: pam_unix(sshd:session): session closed for user core May 8 01:25:25.279673 systemd[1]: sshd@4-145.40.90.133:22-147.75.109.163:49654.service: Deactivated successfully. May 8 01:25:25.280436 systemd[1]: session-7.scope: Deactivated successfully. May 8 01:25:25.280928 systemd-logind[1805]: Session 7 logged out. Waiting for processes to exit. May 8 01:25:25.281790 systemd[1]: Started sshd@5-145.40.90.133:22-147.75.109.163:49656.service - OpenSSH per-connection server daemon (147.75.109.163:49656). May 8 01:25:25.282306 systemd-logind[1805]: Removed session 7. 
May 8 01:25:25.310017 sshd[2053]: Accepted publickey for core from 147.75.109.163 port 49656 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:25.310657 sshd-session[2053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:25.313465 systemd-logind[1805]: New session 8 of user core. May 8 01:25:25.333852 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 01:25:25.396668 sshd[2056]: Connection closed by 147.75.109.163 port 49656 May 8 01:25:25.397440 sshd-session[2053]: pam_unix(sshd:session): session closed for user core May 8 01:25:25.415011 systemd[1]: sshd@5-145.40.90.133:22-147.75.109.163:49656.service: Deactivated successfully. May 8 01:25:25.415761 systemd[1]: session-8.scope: Deactivated successfully. May 8 01:25:25.416220 systemd-logind[1805]: Session 8 logged out. Waiting for processes to exit. May 8 01:25:25.417088 systemd[1]: Started sshd@6-145.40.90.133:22-147.75.109.163:49672.service - OpenSSH per-connection server daemon (147.75.109.163:49672). May 8 01:25:25.417476 systemd-logind[1805]: Removed session 8. May 8 01:25:25.445520 sshd[2061]: Accepted publickey for core from 147.75.109.163 port 49672 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:25.446205 sshd-session[2061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:25.449350 systemd-logind[1805]: New session 9 of user core. May 8 01:25:25.457740 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 8 01:25:25.520588 sudo[2065]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 01:25:25.520733 sudo[2065]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 01:25:25.536272 sudo[2065]: pam_unix(sudo:session): session closed for user root May 8 01:25:25.537103 sshd[2064]: Connection closed by 147.75.109.163 port 49672 May 8 01:25:25.537297 sshd-session[2061]: pam_unix(sshd:session): session closed for user core May 8 01:25:25.549173 systemd[1]: sshd@6-145.40.90.133:22-147.75.109.163:49672.service: Deactivated successfully. May 8 01:25:25.550253 systemd[1]: session-9.scope: Deactivated successfully. May 8 01:25:25.550944 systemd-logind[1805]: Session 9 logged out. Waiting for processes to exit. May 8 01:25:25.552153 systemd[1]: Started sshd@7-145.40.90.133:22-147.75.109.163:49686.service - OpenSSH per-connection server daemon (147.75.109.163:49686). May 8 01:25:25.552889 systemd-logind[1805]: Removed session 9. May 8 01:25:25.591099 sshd[2070]: Accepted publickey for core from 147.75.109.163 port 49686 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:25.592217 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:25.596547 systemd-logind[1805]: New session 10 of user core. May 8 01:25:25.613879 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 8 01:25:25.681379 sudo[2075]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 01:25:25.682299 sudo[2075]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 01:25:25.690954 sudo[2075]: pam_unix(sudo:session): session closed for user root May 8 01:25:25.705413 sudo[2074]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 01:25:25.706220 sudo[2074]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 01:25:25.746571 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 01:25:25.776385 augenrules[2097]: No rules May 8 01:25:25.776762 systemd[1]: audit-rules.service: Deactivated successfully. May 8 01:25:25.776892 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 01:25:25.777409 sudo[2074]: pam_unix(sudo:session): session closed for user root May 8 01:25:25.778040 sshd[2073]: Connection closed by 147.75.109.163 port 49686 May 8 01:25:25.778224 sshd-session[2070]: pam_unix(sshd:session): session closed for user core May 8 01:25:25.792714 systemd[1]: sshd@7-145.40.90.133:22-147.75.109.163:49686.service: Deactivated successfully. May 8 01:25:25.793475 systemd[1]: session-10.scope: Deactivated successfully. May 8 01:25:25.794207 systemd-logind[1805]: Session 10 logged out. Waiting for processes to exit. May 8 01:25:25.794870 systemd[1]: Started sshd@8-145.40.90.133:22-147.75.109.163:49698.service - OpenSSH per-connection server daemon (147.75.109.163:49698). May 8 01:25:25.795396 systemd-logind[1805]: Removed session 10. 
May 8 01:25:25.825654 sshd[2105]: Accepted publickey for core from 147.75.109.163 port 49698 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:25:25.826477 sshd-session[2105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:25:25.829780 systemd-logind[1805]: New session 11 of user core. May 8 01:25:25.847855 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 01:25:25.912269 sudo[2109]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 01:25:25.913124 sudo[2109]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 01:25:26.257763 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 01:25:26.257805 (dockerd)[2135]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 01:25:26.534214 dockerd[2135]: time="2025-05-08T01:25:26.534124320Z" level=info msg="Starting up" May 8 01:25:26.598181 dockerd[2135]: time="2025-05-08T01:25:26.598129739Z" level=info msg="Loading containers: start." May 8 01:25:26.720507 kernel: Initializing XFRM netlink socket May 8 01:25:26.736305 systemd-timesyncd[1737]: Network configuration changed, trying to establish connection. May 8 01:25:26.795521 systemd-networkd[1735]: docker0: Link UP May 8 01:25:26.833634 dockerd[2135]: time="2025-05-08T01:25:26.833581336Z" level=info msg="Loading containers: done." 
May 8 01:25:26.844126 dockerd[2135]: time="2025-05-08T01:25:26.844101579Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 01:25:26.844222 dockerd[2135]: time="2025-05-08T01:25:26.844159876Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 01:25:26.844254 dockerd[2135]: time="2025-05-08T01:25:26.844225680Z" level=info msg="Daemon has completed initialization" May 8 01:25:26.874281 dockerd[2135]: time="2025-05-08T01:25:26.874258346Z" level=info msg="API listen on /run/docker.sock" May 8 01:25:26.874323 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 01:25:27.077974 systemd-timesyncd[1737]: Contacted time server [240b:4004:108:200:8314:1a08:4cee:26d2]:123 (2.flatcar.pool.ntp.org). May 8 01:25:27.078002 systemd-timesyncd[1737]: Initial clock synchronization to Thu 2025-05-08 01:25:26.703306 UTC. May 8 01:25:28.142804 containerd[1823]: time="2025-05-08T01:25:28.142754002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 01:25:28.782843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585324327.mount: Deactivated successfully. 
May 8 01:25:29.942265 containerd[1823]: time="2025-05-08T01:25:29.942241696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:29.942467 containerd[1823]: time="2025-05-08T01:25:29.942448436Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 8 01:25:29.942823 containerd[1823]: time="2025-05-08T01:25:29.942785391Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:29.944556 containerd[1823]: time="2025-05-08T01:25:29.944520433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:29.945107 containerd[1823]: time="2025-05-08T01:25:29.945063832Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.802289403s" May 8 01:25:29.945107 containerd[1823]: time="2025-05-08T01:25:29.945080252Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 8 01:25:29.955026 containerd[1823]: time="2025-05-08T01:25:29.954973519Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 01:25:31.528049 containerd[1823]: time="2025-05-08T01:25:31.528001257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:31.528244 containerd[1823]: time="2025-05-08T01:25:31.528128319Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 8 01:25:31.528659 containerd[1823]: time="2025-05-08T01:25:31.528606708Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:31.530196 containerd[1823]: time="2025-05-08T01:25:31.530160677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:31.530809 containerd[1823]: time="2025-05-08T01:25:31.530775745Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.57578438s" May 8 01:25:31.530809 containerd[1823]: time="2025-05-08T01:25:31.530807072Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 8 01:25:31.542182 containerd[1823]: time="2025-05-08T01:25:31.542164436Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 01:25:32.688133 containerd[1823]: time="2025-05-08T01:25:32.688081032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:32.688339 containerd[1823]: time="2025-05-08T01:25:32.688278911Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 8 01:25:32.688689 containerd[1823]: time="2025-05-08T01:25:32.688652116Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:32.690569 containerd[1823]: time="2025-05-08T01:25:32.690515906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:32.691025 containerd[1823]: time="2025-05-08T01:25:32.690983928Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.148800707s" May 8 01:25:32.691025 containerd[1823]: time="2025-05-08T01:25:32.691000459Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 8 01:25:32.701670 containerd[1823]: time="2025-05-08T01:25:32.701624256Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 01:25:33.355423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 01:25:33.369699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 01:25:33.518888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3499815805.mount: Deactivated successfully. May 8 01:25:33.628982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 01:25:33.631148 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 01:25:33.658585 kubelet[2474]: E0508 01:25:33.658505 2474 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 01:25:33.660008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 01:25:33.660123 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 01:25:33.660347 systemd[1]: kubelet.service: Consumed 97ms CPU time, 105.8M memory peak. May 8 01:25:33.798178 containerd[1823]: time="2025-05-08T01:25:33.798151167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:33.798381 containerd[1823]: time="2025-05-08T01:25:33.798363078Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 8 01:25:33.798689 containerd[1823]: time="2025-05-08T01:25:33.798679409Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:33.799528 containerd[1823]: time="2025-05-08T01:25:33.799516936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:33.800205 containerd[1823]: time="2025-05-08T01:25:33.800195656Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id 
\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.098552174s" May 8 01:25:33.800228 containerd[1823]: time="2025-05-08T01:25:33.800209584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 01:25:33.811577 containerd[1823]: time="2025-05-08T01:25:33.811510523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 01:25:34.265661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380253960.mount: Deactivated successfully. May 8 01:25:34.815140 containerd[1823]: time="2025-05-08T01:25:34.815116208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:34.815347 containerd[1823]: time="2025-05-08T01:25:34.815299877Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 01:25:34.815706 containerd[1823]: time="2025-05-08T01:25:34.815695817Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:34.817559 containerd[1823]: time="2025-05-08T01:25:34.817520047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:34.818085 containerd[1823]: time="2025-05-08T01:25:34.818045418Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo 
tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.006514064s" May 8 01:25:34.818085 containerd[1823]: time="2025-05-08T01:25:34.818060261Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 01:25:34.828481 containerd[1823]: time="2025-05-08T01:25:34.828461430Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 01:25:35.243899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776307263.mount: Deactivated successfully. May 8 01:25:35.244939 containerd[1823]: time="2025-05-08T01:25:35.244894728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:35.245141 containerd[1823]: time="2025-05-08T01:25:35.245120180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 8 01:25:35.245516 containerd[1823]: time="2025-05-08T01:25:35.245479909Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:35.246664 containerd[1823]: time="2025-05-08T01:25:35.246624745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:35.247132 containerd[1823]: time="2025-05-08T01:25:35.247117714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 418.636442ms" May 8 01:25:35.247183 containerd[1823]: time="2025-05-08T01:25:35.247132556Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 01:25:35.258621 containerd[1823]: time="2025-05-08T01:25:35.258595197Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 01:25:35.740146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728239243.mount: Deactivated successfully. May 8 01:25:36.836241 containerd[1823]: time="2025-05-08T01:25:36.836217258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:36.836476 containerd[1823]: time="2025-05-08T01:25:36.836365726Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 8 01:25:36.836878 containerd[1823]: time="2025-05-08T01:25:36.836865754Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:36.838716 containerd[1823]: time="2025-05-08T01:25:36.838675750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:25:36.839257 containerd[1823]: time="2025-05-08T01:25:36.839211824Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.580597174s" May 8 01:25:36.839257 
containerd[1823]: time="2025-05-08T01:25:36.839232135Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 01:25:39.027586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 01:25:39.027690 systemd[1]: kubelet.service: Consumed 97ms CPU time, 105.8M memory peak. May 8 01:25:39.049925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 01:25:39.062574 systemd[1]: Reload requested from client PID 2772 ('systemctl') (unit session-11.scope)... May 8 01:25:39.062582 systemd[1]: Reloading... May 8 01:25:39.129565 zram_generator::config[2818]: No configuration found. May 8 01:25:39.198178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 01:25:39.280437 systemd[1]: Reloading finished in 217 ms. May 8 01:25:39.323955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 01:25:39.325536 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 01:25:39.326078 systemd[1]: kubelet.service: Deactivated successfully. May 8 01:25:39.326206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 01:25:39.326230 systemd[1]: kubelet.service: Consumed 64ms CPU time, 83.5M memory peak. May 8 01:25:39.327058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 01:25:39.548364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 01:25:39.550483 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 01:25:39.574210 kubelet[2887]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 01:25:39.574210 kubelet[2887]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 01:25:39.574210 kubelet[2887]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 01:25:39.575126 kubelet[2887]: I0508 01:25:39.575108 2887 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 01:25:39.735563 kubelet[2887]: I0508 01:25:39.735512 2887 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 01:25:39.735563 kubelet[2887]: I0508 01:25:39.735523 2887 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 01:25:39.735675 kubelet[2887]: I0508 01:25:39.735625 2887 server.go:927] "Client rotation is on, will bootstrap in background" May 8 01:25:39.745405 kubelet[2887]: I0508 01:25:39.745361 2887 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 01:25:39.746368 kubelet[2887]: E0508 01:25:39.746331 2887 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://145.40.90.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.760067 kubelet[2887]: I0508 01:25:39.760032 2887 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 01:25:39.762106 kubelet[2887]: I0508 01:25:39.762058 2887 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 01:25:39.762218 kubelet[2887]: I0508 01:25:39.762089 2887 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-cd63e3b163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":
null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 01:25:39.762707 kubelet[2887]: I0508 01:25:39.762673 2887 topology_manager.go:138] "Creating topology manager with none policy" May 8 01:25:39.762707 kubelet[2887]: I0508 01:25:39.762681 2887 container_manager_linux.go:301] "Creating device plugin manager" May 8 01:25:39.762758 kubelet[2887]: I0508 01:25:39.762748 2887 state_mem.go:36] "Initialized new in-memory state store" May 8 01:25:39.763430 kubelet[2887]: I0508 01:25:39.763394 2887 kubelet.go:400] "Attempting to sync node with API server" May 8 01:25:39.763430 kubelet[2887]: I0508 01:25:39.763417 2887 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 01:25:39.763430 kubelet[2887]: I0508 01:25:39.763428 2887 kubelet.go:312] "Adding apiserver pod source" May 8 01:25:39.763531 kubelet[2887]: I0508 01:25:39.763437 2887 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 01:25:39.766588 kubelet[2887]: W0508 01:25:39.766532 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.90.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.766712 kubelet[2887]: W0508 01:25:39.766590 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-cd63e3b163&limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.766712 kubelet[2887]: E0508 01:25:39.766635 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.90.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.766712 
kubelet[2887]: E0508 01:25:39.766665 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://145.40.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-cd63e3b163&limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.767357 kubelet[2887]: I0508 01:25:39.767319 2887 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 01:25:39.768610 kubelet[2887]: I0508 01:25:39.768574 2887 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 01:25:39.768610 kubelet[2887]: W0508 01:25:39.768603 2887 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 01:25:39.768934 kubelet[2887]: I0508 01:25:39.768927 2887 server.go:1264] "Started kubelet" May 8 01:25:39.769037 kubelet[2887]: I0508 01:25:39.768986 2887 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 01:25:39.769037 kubelet[2887]: I0508 01:25:39.769010 2887 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 01:25:39.769244 kubelet[2887]: I0508 01:25:39.769235 2887 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 01:25:39.769737 kubelet[2887]: I0508 01:25:39.769729 2887 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 01:25:39.769771 kubelet[2887]: I0508 01:25:39.769760 2887 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 01:25:39.769803 kubelet[2887]: E0508 01:25:39.769766 2887 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-cd63e3b163\" not found" May 8 01:25:39.769803 kubelet[2887]: I0508 01:25:39.769788 2887 desired_state_of_world_populator.go:149] "Desired 
state populator starts to run" May 8 01:25:39.769848 kubelet[2887]: I0508 01:25:39.769820 2887 reconciler.go:26] "Reconciler: start to sync state" May 8 01:25:39.769848 kubelet[2887]: I0508 01:25:39.769824 2887 server.go:455] "Adding debug handlers to kubelet server" May 8 01:25:39.769955 kubelet[2887]: E0508 01:25:39.769936 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-cd63e3b163?timeout=10s\": dial tcp 145.40.90.133:6443: connect: connection refused" interval="200ms" May 8 01:25:39.770009 kubelet[2887]: W0508 01:25:39.769982 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.90.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.770033 kubelet[2887]: E0508 01:25:39.770021 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.770055 kubelet[2887]: I0508 01:25:39.770046 2887 factory.go:221] Registration of the systemd container factory successfully May 8 01:25:39.770107 kubelet[2887]: I0508 01:25:39.770099 2887 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 01:25:39.771146 kubelet[2887]: I0508 01:25:39.771138 2887 factory.go:221] Registration of the containerd container factory successfully May 8 01:25:39.771380 kubelet[2887]: E0508 01:25:39.771368 2887 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 01:25:39.775241 kubelet[2887]: E0508 01:25:39.775135 2887 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.133:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.133:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-cd63e3b163.183d68df88a1c207 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-cd63e3b163,UID:ci-4230.1.1-n-cd63e3b163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-cd63e3b163,},FirstTimestamp:2025-05-08 01:25:39.768918535 +0000 UTC m=+0.216462905,LastTimestamp:2025-05-08 01:25:39.768918535 +0000 UTC m=+0.216462905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-cd63e3b163,}" May 8 01:25:39.780449 kubelet[2887]: I0508 01:25:39.780428 2887 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 01:25:39.781004 kubelet[2887]: I0508 01:25:39.780993 2887 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 01:25:39.781028 kubelet[2887]: I0508 01:25:39.781013 2887 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 01:25:39.781048 kubelet[2887]: I0508 01:25:39.781028 2887 kubelet.go:2337] "Starting kubelet main sync loop" May 8 01:25:39.781065 kubelet[2887]: E0508 01:25:39.781056 2887 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 01:25:39.781265 kubelet[2887]: W0508 01:25:39.781251 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.90.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.781293 kubelet[2887]: E0508 01:25:39.781272 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused May 8 01:25:39.781658 kubelet[2887]: I0508 01:25:39.781651 2887 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 01:25:39.781684 kubelet[2887]: I0508 01:25:39.781659 2887 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 01:25:39.781684 kubelet[2887]: I0508 01:25:39.781676 2887 state_mem.go:36] "Initialized new in-memory state store" May 8 01:25:39.782519 kubelet[2887]: I0508 01:25:39.782497 2887 policy_none.go:49] "None policy: Start" May 8 01:25:39.782883 kubelet[2887]: I0508 01:25:39.782875 2887 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 01:25:39.782915 kubelet[2887]: I0508 01:25:39.782887 2887 state_mem.go:35] "Initializing new in-memory state store" May 8 01:25:39.785963 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 8 01:25:39.811764 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 01:25:39.820294 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 01:25:39.836785 kubelet[2887]: I0508 01:25:39.836680 2887 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 01:25:39.837327 kubelet[2887]: I0508 01:25:39.837210 2887 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 01:25:39.837528 kubelet[2887]: I0508 01:25:39.837513 2887 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 01:25:39.839678 kubelet[2887]: E0508 01:25:39.839626 2887 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-cd63e3b163\" not found" May 8 01:25:39.873915 kubelet[2887]: I0508 01:25:39.873842 2887 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.874649 kubelet[2887]: E0508 01:25:39.874591 2887 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.133:6443/api/v1/nodes\": dial tcp 145.40.90.133:6443: connect: connection refused" node="ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.881886 kubelet[2887]: I0508 01:25:39.881759 2887 topology_manager.go:215] "Topology Admit Handler" podUID="ed8a7842f9c5a0d65cc8bdb0a0251557" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.885108 kubelet[2887]: I0508 01:25:39.885050 2887 topology_manager.go:215] "Topology Admit Handler" podUID="5345db053fe037f5c6b414d8b88bda66" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.888446 kubelet[2887]: I0508 01:25:39.888397 2887 topology_manager.go:215] "Topology Admit Handler" 
podUID="0e9691d5100183154a8baecad1fc08bc" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.902593 systemd[1]: Created slice kubepods-burstable-poded8a7842f9c5a0d65cc8bdb0a0251557.slice - libcontainer container kubepods-burstable-poded8a7842f9c5a0d65cc8bdb0a0251557.slice. May 8 01:25:39.942934 systemd[1]: Created slice kubepods-burstable-pod0e9691d5100183154a8baecad1fc08bc.slice - libcontainer container kubepods-burstable-pod0e9691d5100183154a8baecad1fc08bc.slice. May 8 01:25:39.963911 systemd[1]: Created slice kubepods-burstable-pod5345db053fe037f5c6b414d8b88bda66.slice - libcontainer container kubepods-burstable-pod5345db053fe037f5c6b414d8b88bda66.slice. May 8 01:25:39.970567 kubelet[2887]: I0508 01:25:39.970476 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed8a7842f9c5a0d65cc8bdb0a0251557-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" (UID: \"ed8a7842f9c5a0d65cc8bdb0a0251557\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.970767 kubelet[2887]: I0508 01:25:39.970603 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed8a7842f9c5a0d65cc8bdb0a0251557-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" (UID: \"ed8a7842f9c5a0d65cc8bdb0a0251557\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.970767 kubelet[2887]: I0508 01:25:39.970693 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163" May 8 
01:25:39.970946 kubelet[2887]: I0508 01:25:39.970758 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.970946 kubelet[2887]: I0508 01:25:39.970844 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.970946 kubelet[2887]: I0508 01:25:39.970915 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e9691d5100183154a8baecad1fc08bc-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-cd63e3b163\" (UID: \"0e9691d5100183154a8baecad1fc08bc\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.971257 kubelet[2887]: I0508 01:25:39.970978 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed8a7842f9c5a0d65cc8bdb0a0251557-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" (UID: \"ed8a7842f9c5a0d65cc8bdb0a0251557\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.971257 kubelet[2887]: I0508 01:25:39.971024 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.971257 kubelet[2887]: I0508 01:25:39.971066 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163" May 8 01:25:39.971257 kubelet[2887]: E0508 01:25:39.971088 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-cd63e3b163?timeout=10s\": dial tcp 145.40.90.133:6443: connect: connection refused" interval="400ms" May 8 01:25:40.076774 kubelet[2887]: I0508 01:25:40.076752 2887 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-cd63e3b163" May 8 01:25:40.077074 kubelet[2887]: E0508 01:25:40.077012 2887 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.133:6443/api/v1/nodes\": dial tcp 145.40.90.133:6443: connect: connection refused" node="ci-4230.1.1-n-cd63e3b163" May 8 01:25:40.235889 containerd[1823]: time="2025-05-08T01:25:40.235759696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-cd63e3b163,Uid:ed8a7842f9c5a0d65cc8bdb0a0251557,Namespace:kube-system,Attempt:0,}" May 8 01:25:40.260432 containerd[1823]: time="2025-05-08T01:25:40.260402763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-cd63e3b163,Uid:0e9691d5100183154a8baecad1fc08bc,Namespace:kube-system,Attempt:0,}" May 8 01:25:40.269044 containerd[1823]: time="2025-05-08T01:25:40.268996730Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-cd63e3b163,Uid:5345db053fe037f5c6b414d8b88bda66,Namespace:kube-system,Attempt:0,}" May 8 01:25:40.372070 kubelet[2887]: E0508 01:25:40.371827 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-cd63e3b163?timeout=10s\": dial tcp 145.40.90.133:6443: connect: connection refused" interval="800ms" May 8 01:25:40.478651 kubelet[2887]: I0508 01:25:40.478605 2887 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-cd63e3b163" May 8 01:25:40.478866 kubelet[2887]: E0508 01:25:40.478833 2887 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.133:6443/api/v1/nodes\": dial tcp 145.40.90.133:6443: connect: connection refused" node="ci-4230.1.1-n-cd63e3b163" May 8 01:25:40.662798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502528816.mount: Deactivated successfully. 
May 8 01:25:40.663969 containerd[1823]: time="2025-05-08T01:25:40.663923159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 01:25:40.664222 containerd[1823]: time="2025-05-08T01:25:40.664171924Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 8 01:25:40.664826 containerd[1823]: time="2025-05-08T01:25:40.664792526Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 01:25:40.665778 containerd[1823]: time="2025-05-08T01:25:40.665721671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 01:25:40.665943 containerd[1823]: time="2025-05-08T01:25:40.665888805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 01:25:40.666640 containerd[1823]: time="2025-05-08T01:25:40.666603676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 01:25:40.667212 containerd[1823]: time="2025-05-08T01:25:40.667176445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 01:25:40.667432 containerd[1823]: time="2025-05-08T01:25:40.667400447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 431.405747ms"
May 8 01:25:40.667829 containerd[1823]: time="2025-05-08T01:25:40.667795161Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 01:25:40.669862 containerd[1823]: time="2025-05-08T01:25:40.669819325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 409.355781ms"
May 8 01:25:40.670393 containerd[1823]: time="2025-05-08T01:25:40.670358492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 401.329527ms"
May 8 01:25:40.754623 containerd[1823]: time="2025-05-08T01:25:40.754576164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 01:25:40.754623 containerd[1823]: time="2025-05-08T01:25:40.754610153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 01:25:40.754623 containerd[1823]: time="2025-05-08T01:25:40.754620788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 01:25:40.754779 containerd[1823]: time="2025-05-08T01:25:40.754671104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 01:25:40.755485 containerd[1823]: time="2025-05-08T01:25:40.755451799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 01:25:40.755485 containerd[1823]: time="2025-05-08T01:25:40.755480874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 01:25:40.755573 containerd[1823]: time="2025-05-08T01:25:40.755487980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 01:25:40.755573 containerd[1823]: time="2025-05-08T01:25:40.755511470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 01:25:40.755615 containerd[1823]: time="2025-05-08T01:25:40.755571446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 01:25:40.755615 containerd[1823]: time="2025-05-08T01:25:40.755578715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 01:25:40.755615 containerd[1823]: time="2025-05-08T01:25:40.755575094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 01:25:40.755662 containerd[1823]: time="2025-05-08T01:25:40.755620039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 01:25:40.770621 systemd[1]: Started cri-containerd-623c4d60fab267e0d9b1338134d4dd3bfb382bc24ca8d1055aa9100bfcdf8829.scope - libcontainer container 623c4d60fab267e0d9b1338134d4dd3bfb382bc24ca8d1055aa9100bfcdf8829.
May 8 01:25:40.771400 systemd[1]: Started cri-containerd-de082e70971935e98868c72ef81cefc244539544848a0d3125d73e59dd711606.scope - libcontainer container de082e70971935e98868c72ef81cefc244539544848a0d3125d73e59dd711606.
May 8 01:25:40.772131 systemd[1]: Started cri-containerd-dfe5915a7c3592c474ffe20dd585b8d8c7fd6a3338f841a8e918752a5512701e.scope - libcontainer container dfe5915a7c3592c474ffe20dd585b8d8c7fd6a3338f841a8e918752a5512701e.
May 8 01:25:40.794068 containerd[1823]: time="2025-05-08T01:25:40.794035883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-cd63e3b163,Uid:5345db053fe037f5c6b414d8b88bda66,Namespace:kube-system,Attempt:0,} returns sandbox id \"623c4d60fab267e0d9b1338134d4dd3bfb382bc24ca8d1055aa9100bfcdf8829\""
May 8 01:25:40.794482 containerd[1823]: time="2025-05-08T01:25:40.794465689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-cd63e3b163,Uid:0e9691d5100183154a8baecad1fc08bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"de082e70971935e98868c72ef81cefc244539544848a0d3125d73e59dd711606\""
May 8 01:25:40.795598 containerd[1823]: time="2025-05-08T01:25:40.795583694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-cd63e3b163,Uid:ed8a7842f9c5a0d65cc8bdb0a0251557,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfe5915a7c3592c474ffe20dd585b8d8c7fd6a3338f841a8e918752a5512701e\""
May 8 01:25:40.796041 containerd[1823]: time="2025-05-08T01:25:40.796031107Z" level=info msg="CreateContainer within sandbox \"de082e70971935e98868c72ef81cefc244539544848a0d3125d73e59dd711606\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 8 01:25:40.796989 containerd[1823]: time="2025-05-08T01:25:40.796973076Z" level=info msg="CreateContainer within sandbox \"623c4d60fab267e0d9b1338134d4dd3bfb382bc24ca8d1055aa9100bfcdf8829\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 8 01:25:40.797480 containerd[1823]: time="2025-05-08T01:25:40.797467630Z" level=info msg="CreateContainer within sandbox \"dfe5915a7c3592c474ffe20dd585b8d8c7fd6a3338f841a8e918752a5512701e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 8 01:25:40.802652 containerd[1823]: time="2025-05-08T01:25:40.802639695Z" level=info msg="CreateContainer within sandbox \"623c4d60fab267e0d9b1338134d4dd3bfb382bc24ca8d1055aa9100bfcdf8829\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aca852c51665fb14c5cbed3de96b9f4f23eeae739211b20d9becb3524bd844ab\""
May 8 01:25:40.802917 containerd[1823]: time="2025-05-08T01:25:40.802885922Z" level=info msg="StartContainer for \"aca852c51665fb14c5cbed3de96b9f4f23eeae739211b20d9becb3524bd844ab\""
May 8 01:25:40.803053 containerd[1823]: time="2025-05-08T01:25:40.803006703Z" level=info msg="CreateContainer within sandbox \"de082e70971935e98868c72ef81cefc244539544848a0d3125d73e59dd711606\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf17dae9ec1da77539785018960b21524213fc5cd6a2039448a2cd9e53925e10\""
May 8 01:25:40.803153 containerd[1823]: time="2025-05-08T01:25:40.803143753Z" level=info msg="StartContainer for \"bf17dae9ec1da77539785018960b21524213fc5cd6a2039448a2cd9e53925e10\""
May 8 01:25:40.804902 containerd[1823]: time="2025-05-08T01:25:40.804883013Z" level=info msg="CreateContainer within sandbox \"dfe5915a7c3592c474ffe20dd585b8d8c7fd6a3338f841a8e918752a5512701e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0fd537b4ccde55c4e8a8c65758eadd8abb4ec33b998bf70fe9ea27d30db72d12\""
May 8 01:25:40.805104 containerd[1823]: time="2025-05-08T01:25:40.805086937Z" level=info msg="StartContainer for \"0fd537b4ccde55c4e8a8c65758eadd8abb4ec33b998bf70fe9ea27d30db72d12\""
May 8 01:25:40.825782 systemd[1]: Started cri-containerd-aca852c51665fb14c5cbed3de96b9f4f23eeae739211b20d9becb3524bd844ab.scope - libcontainer container aca852c51665fb14c5cbed3de96b9f4f23eeae739211b20d9becb3524bd844ab.
May 8 01:25:40.826438 systemd[1]: Started cri-containerd-bf17dae9ec1da77539785018960b21524213fc5cd6a2039448a2cd9e53925e10.scope - libcontainer container bf17dae9ec1da77539785018960b21524213fc5cd6a2039448a2cd9e53925e10.
May 8 01:25:40.827976 systemd[1]: Started cri-containerd-0fd537b4ccde55c4e8a8c65758eadd8abb4ec33b998bf70fe9ea27d30db72d12.scope - libcontainer container 0fd537b4ccde55c4e8a8c65758eadd8abb4ec33b998bf70fe9ea27d30db72d12.
May 8 01:25:40.855057 containerd[1823]: time="2025-05-08T01:25:40.855023298Z" level=info msg="StartContainer for \"bf17dae9ec1da77539785018960b21524213fc5cd6a2039448a2cd9e53925e10\" returns successfully"
May 8 01:25:40.855147 containerd[1823]: time="2025-05-08T01:25:40.855023309Z" level=info msg="StartContainer for \"0fd537b4ccde55c4e8a8c65758eadd8abb4ec33b998bf70fe9ea27d30db72d12\" returns successfully"
May 8 01:25:40.855147 containerd[1823]: time="2025-05-08T01:25:40.855032201Z" level=info msg="StartContainer for \"aca852c51665fb14c5cbed3de96b9f4f23eeae739211b20d9becb3524bd844ab\" returns successfully"
May 8 01:25:41.280407 kubelet[2887]: I0508 01:25:41.280390 2887 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-cd63e3b163"
May 8 01:25:41.387570 kubelet[2887]: E0508 01:25:41.387536 2887 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-cd63e3b163\" not found" node="ci-4230.1.1-n-cd63e3b163"
May 8 01:25:41.492760 kubelet[2887]: I0508 01:25:41.492742 2887 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-n-cd63e3b163"
May 8 01:25:41.765337 kubelet[2887]: I0508 01:25:41.765211 2887 apiserver.go:52] "Watching apiserver"
May 8 01:25:41.770368 kubelet[2887]: I0508 01:25:41.770278 2887 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 01:25:41.803712 kubelet[2887]: E0508 01:25:41.803608 2887 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.1-n-cd63e3b163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:41.803712 kubelet[2887]: E0508 01:25:41.803638 2887 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:41.803959 kubelet[2887]: E0508 01:25:41.803652 2887 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:42.815348 kubelet[2887]: W0508 01:25:42.815285 2887 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 8 01:25:43.986811 systemd[1]: Reload requested from client PID 3202 ('systemctl') (unit session-11.scope)...
May 8 01:25:43.986819 systemd[1]: Reloading...
May 8 01:25:44.029574 zram_generator::config[3248]: No configuration found.
May 8 01:25:44.104439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 01:25:44.195663 systemd[1]: Reloading finished in 208 ms.
May 8 01:25:44.212971 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 01:25:44.213097 kubelet[2887]: I0508 01:25:44.213005 2887 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 01:25:44.219149 systemd[1]: kubelet.service: Deactivated successfully.
May 8 01:25:44.219259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 01:25:44.219281 systemd[1]: kubelet.service: Consumed 727ms CPU time, 130.3M memory peak.
May 8 01:25:44.233968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 01:25:44.439992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 01:25:44.442125 (kubelet)[3312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 01:25:44.464069 kubelet[3312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 01:25:44.464069 kubelet[3312]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 01:25:44.464069 kubelet[3312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 01:25:44.464294 kubelet[3312]: I0508 01:25:44.464081 3312 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 01:25:44.467492 kubelet[3312]: I0508 01:25:44.467445 3312 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 01:25:44.467492 kubelet[3312]: I0508 01:25:44.467459 3312 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 01:25:44.467632 kubelet[3312]: I0508 01:25:44.467591 3312 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 01:25:44.468382 kubelet[3312]: I0508 01:25:44.468346 3312 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 8 01:25:44.468951 kubelet[3312]: I0508 01:25:44.468940 3312 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 01:25:44.478911 kubelet[3312]: I0508 01:25:44.478875 3312 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 01:25:44.479039 kubelet[3312]: I0508 01:25:44.478996 3312 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 01:25:44.479127 kubelet[3312]: I0508 01:25:44.479011 3312 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-cd63e3b163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 01:25:44.479127 kubelet[3312]: I0508 01:25:44.479113 3312 topology_manager.go:138] "Creating topology manager with none policy"
May 8 01:25:44.479127 kubelet[3312]: I0508 01:25:44.479118 3312 container_manager_linux.go:301] "Creating device plugin manager"
May 8 01:25:44.479214 kubelet[3312]: I0508 01:25:44.479141 3312 state_mem.go:36] "Initialized new in-memory state store"
May 8 01:25:44.479214 kubelet[3312]: I0508 01:25:44.479186 3312 kubelet.go:400] "Attempting to sync node with API server"
May 8 01:25:44.479214 kubelet[3312]: I0508 01:25:44.479193 3312 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 01:25:44.479214 kubelet[3312]: I0508 01:25:44.479204 3312 kubelet.go:312] "Adding apiserver pod source"
May 8 01:25:44.479214 kubelet[3312]: I0508 01:25:44.479213 3312 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 01:25:44.479517 kubelet[3312]: I0508 01:25:44.479507 3312 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 8 01:25:44.479612 kubelet[3312]: I0508 01:25:44.479607 3312 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 01:25:44.480486 kubelet[3312]: I0508 01:25:44.480206 3312 server.go:1264] "Started kubelet"
May 8 01:25:44.480486 kubelet[3312]: I0508 01:25:44.480359 3312 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 01:25:44.480932 kubelet[3312]: I0508 01:25:44.480644 3312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 01:25:44.480932 kubelet[3312]: I0508 01:25:44.480861 3312 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 01:25:44.481616 kubelet[3312]: I0508 01:25:44.481607 3312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 01:25:44.481664 kubelet[3312]: E0508 01:25:44.481652 3312 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 01:25:44.481702 kubelet[3312]: E0508 01:25:44.481668 3312 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-cd63e3b163\" not found"
May 8 01:25:44.481702 kubelet[3312]: I0508 01:25:44.481678 3312 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 01:25:44.481759 kubelet[3312]: I0508 01:25:44.481715 3312 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 01:25:44.481820 kubelet[3312]: I0508 01:25:44.481814 3312 reconciler.go:26] "Reconciler: start to sync state"
May 8 01:25:44.481848 kubelet[3312]: I0508 01:25:44.481820 3312 server.go:455] "Adding debug handlers to kubelet server"
May 8 01:25:44.482136 kubelet[3312]: I0508 01:25:44.482119 3312 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 01:25:44.482668 kubelet[3312]: I0508 01:25:44.482658 3312 factory.go:221] Registration of the containerd container factory successfully
May 8 01:25:44.482668 kubelet[3312]: I0508 01:25:44.482668 3312 factory.go:221] Registration of the systemd container factory successfully
May 8 01:25:44.487059 kubelet[3312]: I0508 01:25:44.487044 3312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 01:25:44.487590 kubelet[3312]: I0508 01:25:44.487553 3312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 01:25:44.487590 kubelet[3312]: I0508 01:25:44.487571 3312 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 01:25:44.487590 kubelet[3312]: I0508 01:25:44.487583 3312 kubelet.go:2337] "Starting kubelet main sync loop"
May 8 01:25:44.487667 kubelet[3312]: E0508 01:25:44.487607 3312 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 01:25:44.500182 kubelet[3312]: I0508 01:25:44.500168 3312 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 01:25:44.500182 kubelet[3312]: I0508 01:25:44.500177 3312 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 01:25:44.500182 kubelet[3312]: I0508 01:25:44.500188 3312 state_mem.go:36] "Initialized new in-memory state store"
May 8 01:25:44.500292 kubelet[3312]: I0508 01:25:44.500284 3312 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 8 01:25:44.500308 kubelet[3312]: I0508 01:25:44.500290 3312 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 8 01:25:44.500308 kubelet[3312]: I0508 01:25:44.500301 3312 policy_none.go:49] "None policy: Start"
May 8 01:25:44.500548 kubelet[3312]: I0508 01:25:44.500525 3312 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 01:25:44.500548 kubelet[3312]: I0508 01:25:44.500535 3312 state_mem.go:35] "Initializing new in-memory state store"
May 8 01:25:44.500625 kubelet[3312]: I0508 01:25:44.500620 3312 state_mem.go:75] "Updated machine memory state"
May 8 01:25:44.502599 kubelet[3312]: I0508 01:25:44.502555 3312 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 01:25:44.502696 kubelet[3312]: I0508 01:25:44.502635 3312 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 01:25:44.502696 kubelet[3312]: I0508 01:25:44.502687 3312 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 01:25:44.588131 kubelet[3312]: I0508 01:25:44.587987 3312 topology_manager.go:215] "Topology Admit Handler" podUID="ed8a7842f9c5a0d65cc8bdb0a0251557" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.588434 kubelet[3312]: I0508 01:25:44.588191 3312 topology_manager.go:215] "Topology Admit Handler" podUID="5345db053fe037f5c6b414d8b88bda66" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.588434 kubelet[3312]: I0508 01:25:44.588385 3312 topology_manager.go:215] "Topology Admit Handler" podUID="0e9691d5100183154a8baecad1fc08bc" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.588960 kubelet[3312]: I0508 01:25:44.588902 3312 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.596014 kubelet[3312]: W0508 01:25:44.595942 3312 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 8 01:25:44.596868 kubelet[3312]: W0508 01:25:44.596758 3312 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 8 01:25:44.596868 kubelet[3312]: W0508 01:25:44.596773 3312 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 8 01:25:44.597168 kubelet[3312]: E0508 01:25:44.596910 3312 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.599829 kubelet[3312]: I0508 01:25:44.599746 3312 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.599996 kubelet[3312]: I0508 01:25:44.599918 3312 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.783985 kubelet[3312]: I0508 01:25:44.783688 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.783985 kubelet[3312]: I0508 01:25:44.783808 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.783985 kubelet[3312]: I0508 01:25:44.783892 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e9691d5100183154a8baecad1fc08bc-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-cd63e3b163\" (UID: \"0e9691d5100183154a8baecad1fc08bc\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.783985 kubelet[3312]: I0508 01:25:44.783956 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed8a7842f9c5a0d65cc8bdb0a0251557-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" (UID: \"ed8a7842f9c5a0d65cc8bdb0a0251557\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.784519 kubelet[3312]: I0508 01:25:44.784017 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed8a7842f9c5a0d65cc8bdb0a0251557-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" (UID: \"ed8a7842f9c5a0d65cc8bdb0a0251557\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.784519 kubelet[3312]: I0508 01:25:44.784089 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed8a7842f9c5a0d65cc8bdb0a0251557-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" (UID: \"ed8a7842f9c5a0d65cc8bdb0a0251557\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.784519 kubelet[3312]: I0508 01:25:44.784164 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.784519 kubelet[3312]: I0508 01:25:44.784275 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:44.784519 kubelet[3312]: I0508 01:25:44.784372 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5345db053fe037f5c6b414d8b88bda66-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" (UID: \"5345db053fe037f5c6b414d8b88bda66\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:45.013734 sudo[3355]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 8 01:25:45.014669 sudo[3355]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 8 01:25:45.393094 sudo[3355]: pam_unix(sudo:session): session closed for user root
May 8 01:25:45.480361 kubelet[3312]: I0508 01:25:45.480314 3312 apiserver.go:52] "Watching apiserver"
May 8 01:25:45.482532 kubelet[3312]: I0508 01:25:45.482490 3312 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 01:25:45.494897 kubelet[3312]: W0508 01:25:45.494887 3312 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 8 01:25:45.494897 kubelet[3312]: W0508 01:25:45.494894 3312 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 8 01:25:45.494966 kubelet[3312]: E0508 01:25:45.494917 3312 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-cd63e3b163\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:45.494966 kubelet[3312]: E0508 01:25:45.494927 3312 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-n-cd63e3b163\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163"
May 8 01:25:45.501977 kubelet[3312]: I0508 01:25:45.501921 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-cd63e3b163" podStartSLOduration=3.50188999 podStartE2EDuration="3.50188999s" podCreationTimestamp="2025-05-08 01:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 01:25:45.50175159 +0000 UTC m=+1.057776617" watchObservedRunningTime="2025-05-08 01:25:45.50188999 +0000 UTC m=+1.057915017"
May 8 01:25:45.509335 kubelet[3312]: I0508 01:25:45.509284 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-cd63e3b163" podStartSLOduration=1.509276157 podStartE2EDuration="1.509276157s" podCreationTimestamp="2025-05-08 01:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 01:25:45.505939575 +0000 UTC m=+1.061964602" watchObservedRunningTime="2025-05-08 01:25:45.509276157 +0000 UTC m=+1.065301182"
May 8 01:25:45.509335 kubelet[3312]: I0508 01:25:45.509319 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-cd63e3b163" podStartSLOduration=1.509316628 podStartE2EDuration="1.509316628s" podCreationTimestamp="2025-05-08 01:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 01:25:45.509307755 +0000 UTC m=+1.065332782" watchObservedRunningTime="2025-05-08 01:25:45.509316628 +0000 UTC m=+1.065341652"
May 8 01:25:46.563385 sudo[2109]: pam_unix(sudo:session): session closed for user root
May 8 01:25:46.564200 sshd[2108]: Connection closed by 147.75.109.163 port 49698
May 8 01:25:46.564362 sshd-session[2105]: pam_unix(sshd:session): session closed for user core
May 8 01:25:46.566033 systemd[1]: sshd@8-145.40.90.133:22-147.75.109.163:49698.service: Deactivated successfully.
May 8 01:25:46.567011 systemd[1]: session-11.scope: Deactivated successfully.
May 8 01:25:46.567105 systemd[1]: session-11.scope: Consumed 3.335s CPU time, 307.2M memory peak.
May 8 01:25:46.568156 systemd-logind[1805]: Session 11 logged out. Waiting for processes to exit.
May 8 01:25:46.568738 systemd-logind[1805]: Removed session 11.
May 8 01:25:55.095984 update_engine[1810]: I20250508 01:25:55.095815 1810 update_attempter.cc:509] Updating boot flags...
May 8 01:25:55.172524 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (3445)
May 8 01:25:55.268530 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (3449)
May 8 01:25:58.269630 kubelet[3312]: I0508 01:25:58.269551 3312 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 01:25:58.270700 containerd[1823]: time="2025-05-08T01:25:58.270228909Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 01:25:58.271332 kubelet[3312]: I0508 01:25:58.270691 3312 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 01:25:59.008396 kubelet[3312]: I0508 01:25:59.008322 3312 topology_manager.go:215] "Topology Admit Handler" podUID="16e7b3c5-2218-42a3-a734-8ad869f3bab8" podNamespace="kube-system" podName="kube-proxy-svn4s"
May 8 01:25:59.015254 kubelet[3312]: I0508 01:25:59.015182 3312 topology_manager.go:215] "Topology Admit Handler" podUID="6b4068de-4aa8-412c-81ba-a166136a59c4" podNamespace="kube-system" podName="cilium-9ggq6"
May 8 01:25:59.026235 systemd[1]: Created slice kubepods-besteffort-pod16e7b3c5_2218_42a3_a734_8ad869f3bab8.slice - libcontainer container kubepods-besteffort-pod16e7b3c5_2218_42a3_a734_8ad869f3bab8.slice.
May 8 01:25:59.046733 systemd[1]: Created slice kubepods-burstable-pod6b4068de_4aa8_412c_81ba_a166136a59c4.slice - libcontainer container kubepods-burstable-pod6b4068de_4aa8_412c_81ba_a166136a59c4.slice.
May 8 01:25:59.081755 kubelet[3312]: I0508 01:25:59.081644 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b4068de-4aa8-412c-81ba-a166136a59c4-clustermesh-secrets\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.081755 kubelet[3312]: I0508 01:25:59.081739 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-config-path\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.082081 kubelet[3312]: I0508 01:25:59.081829 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b4068de-4aa8-412c-81ba-a166136a59c4-hubble-tls\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.082081 kubelet[3312]: I0508 01:25:59.081939 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-etc-cni-netd\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.082081 kubelet[3312]: I0508 01:25:59.081992 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-xtables-lock\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.082081 kubelet[3312]: I0508 01:25:59.082040 3312 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e7b3c5-2218-42a3-a734-8ad869f3bab8-xtables-lock\") pod \"kube-proxy-svn4s\" (UID: \"16e7b3c5-2218-42a3-a734-8ad869f3bab8\") " pod="kube-system/kube-proxy-svn4s" May 8 01:25:59.082484 kubelet[3312]: I0508 01:25:59.082091 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fxpr\" (UniqueName: \"kubernetes.io/projected/16e7b3c5-2218-42a3-a734-8ad869f3bab8-kube-api-access-7fxpr\") pod \"kube-proxy-svn4s\" (UID: \"16e7b3c5-2218-42a3-a734-8ad869f3bab8\") " pod="kube-system/kube-proxy-svn4s" May 8 01:25:59.082484 kubelet[3312]: I0508 01:25:59.082144 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-bpf-maps\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.082484 kubelet[3312]: I0508 01:25:59.082208 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-lib-modules\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.082484 kubelet[3312]: I0508 01:25:59.082254 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-run\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.082484 kubelet[3312]: I0508 01:25:59.082303 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-host-proc-sys-net\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.082484 kubelet[3312]: I0508 01:25:59.082349 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-hostproc\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.083044 kubelet[3312]: I0508 01:25:59.082394 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cni-path\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.083044 kubelet[3312]: I0508 01:25:59.082440 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e7b3c5-2218-42a3-a734-8ad869f3bab8-lib-modules\") pod \"kube-proxy-svn4s\" (UID: \"16e7b3c5-2218-42a3-a734-8ad869f3bab8\") " pod="kube-system/kube-proxy-svn4s" May 8 01:25:59.083044 kubelet[3312]: I0508 01:25:59.082489 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9sdk\" (UniqueName: \"kubernetes.io/projected/6b4068de-4aa8-412c-81ba-a166136a59c4-kube-api-access-k9sdk\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.083044 kubelet[3312]: I0508 01:25:59.082604 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/16e7b3c5-2218-42a3-a734-8ad869f3bab8-kube-proxy\") pod \"kube-proxy-svn4s\" (UID: 
\"16e7b3c5-2218-42a3-a734-8ad869f3bab8\") " pod="kube-system/kube-proxy-svn4s" May 8 01:25:59.083044 kubelet[3312]: I0508 01:25:59.082692 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-cgroup\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.083044 kubelet[3312]: I0508 01:25:59.082756 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-host-proc-sys-kernel\") pod \"cilium-9ggq6\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " pod="kube-system/cilium-9ggq6" May 8 01:25:59.335331 kubelet[3312]: I0508 01:25:59.335254 3312 topology_manager.go:215] "Topology Admit Handler" podUID="b922d05a-7343-4943-9145-47176afe120b" podNamespace="kube-system" podName="cilium-operator-599987898-s8p5k" May 8 01:25:59.342864 containerd[1823]: time="2025-05-08T01:25:59.342797360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-svn4s,Uid:16e7b3c5-2218-42a3-a734-8ad869f3bab8,Namespace:kube-system,Attempt:0,}" May 8 01:25:59.344880 systemd[1]: Created slice kubepods-besteffort-podb922d05a_7343_4943_9145_47176afe120b.slice - libcontainer container kubepods-besteffort-podb922d05a_7343_4943_9145_47176afe120b.slice. May 8 01:25:59.350250 containerd[1823]: time="2025-05-08T01:25:59.350227602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ggq6,Uid:6b4068de-4aa8-412c-81ba-a166136a59c4,Namespace:kube-system,Attempt:0,}" May 8 01:25:59.353398 containerd[1823]: time="2025-05-08T01:25:59.353347384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 01:25:59.353469 containerd[1823]: time="2025-05-08T01:25:59.353412584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 01:25:59.353469 containerd[1823]: time="2025-05-08T01:25:59.353420565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:25:59.353469 containerd[1823]: time="2025-05-08T01:25:59.353461278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:25:59.359542 containerd[1823]: time="2025-05-08T01:25:59.359489462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 01:25:59.359542 containerd[1823]: time="2025-05-08T01:25:59.359527112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 01:25:59.359542 containerd[1823]: time="2025-05-08T01:25:59.359534278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:25:59.359650 containerd[1823]: time="2025-05-08T01:25:59.359572937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:25:59.370851 systemd[1]: Started cri-containerd-9a1fe8ba423f8264ac4e4bb9d96b3205be140cab738b3513450aede39ead52cb.scope - libcontainer container 9a1fe8ba423f8264ac4e4bb9d96b3205be140cab738b3513450aede39ead52cb. May 8 01:25:59.372505 systemd[1]: Started cri-containerd-dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e.scope - libcontainer container dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e. 
May 8 01:25:59.380951 containerd[1823]: time="2025-05-08T01:25:59.380929409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-svn4s,Uid:16e7b3c5-2218-42a3-a734-8ad869f3bab8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a1fe8ba423f8264ac4e4bb9d96b3205be140cab738b3513450aede39ead52cb\"" May 8 01:25:59.381916 containerd[1823]: time="2025-05-08T01:25:59.381900258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ggq6,Uid:6b4068de-4aa8-412c-81ba-a166136a59c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\"" May 8 01:25:59.382298 containerd[1823]: time="2025-05-08T01:25:59.382282261Z" level=info msg="CreateContainer within sandbox \"9a1fe8ba423f8264ac4e4bb9d96b3205be140cab738b3513450aede39ead52cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 01:25:59.382514 containerd[1823]: time="2025-05-08T01:25:59.382500668Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 01:25:59.386131 kubelet[3312]: I0508 01:25:59.386088 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmkx\" (UniqueName: \"kubernetes.io/projected/b922d05a-7343-4943-9145-47176afe120b-kube-api-access-pgmkx\") pod \"cilium-operator-599987898-s8p5k\" (UID: \"b922d05a-7343-4943-9145-47176afe120b\") " pod="kube-system/cilium-operator-599987898-s8p5k" May 8 01:25:59.386131 kubelet[3312]: I0508 01:25:59.386107 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b922d05a-7343-4943-9145-47176afe120b-cilium-config-path\") pod \"cilium-operator-599987898-s8p5k\" (UID: \"b922d05a-7343-4943-9145-47176afe120b\") " pod="kube-system/cilium-operator-599987898-s8p5k" May 8 01:25:59.387814 containerd[1823]: 
time="2025-05-08T01:25:59.387769681Z" level=info msg="CreateContainer within sandbox \"9a1fe8ba423f8264ac4e4bb9d96b3205be140cab738b3513450aede39ead52cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4d1ac7913bd86ad8c8395f68c1ed6fc89bc2d91cadab46c174a1cf0d84c38d23\"" May 8 01:25:59.388047 containerd[1823]: time="2025-05-08T01:25:59.388005648Z" level=info msg="StartContainer for \"4d1ac7913bd86ad8c8395f68c1ed6fc89bc2d91cadab46c174a1cf0d84c38d23\"" May 8 01:25:59.417768 systemd[1]: Started cri-containerd-4d1ac7913bd86ad8c8395f68c1ed6fc89bc2d91cadab46c174a1cf0d84c38d23.scope - libcontainer container 4d1ac7913bd86ad8c8395f68c1ed6fc89bc2d91cadab46c174a1cf0d84c38d23. May 8 01:25:59.445704 containerd[1823]: time="2025-05-08T01:25:59.445643906Z" level=info msg="StartContainer for \"4d1ac7913bd86ad8c8395f68c1ed6fc89bc2d91cadab46c174a1cf0d84c38d23\" returns successfully" May 8 01:25:59.647839 containerd[1823]: time="2025-05-08T01:25:59.647643673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s8p5k,Uid:b922d05a-7343-4943-9145-47176afe120b,Namespace:kube-system,Attempt:0,}" May 8 01:25:59.659657 containerd[1823]: time="2025-05-08T01:25:59.659405264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 01:25:59.659657 containerd[1823]: time="2025-05-08T01:25:59.659646739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 01:25:59.659657 containerd[1823]: time="2025-05-08T01:25:59.659655003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:25:59.659803 containerd[1823]: time="2025-05-08T01:25:59.659694510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:25:59.680638 systemd[1]: Started cri-containerd-17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c.scope - libcontainer container 17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c. May 8 01:25:59.703546 containerd[1823]: time="2025-05-08T01:25:59.703524404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s8p5k,Uid:b922d05a-7343-4943-9145-47176afe120b,Namespace:kube-system,Attempt:0,} returns sandbox id \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\"" May 8 01:26:02.795907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983071229.mount: Deactivated successfully. May 8 01:26:03.585890 containerd[1823]: time="2025-05-08T01:26:03.585838334Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:26:03.586090 containerd[1823]: time="2025-05-08T01:26:03.586057497Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 01:26:03.586429 containerd[1823]: time="2025-05-08T01:26:03.586388661Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:26:03.587198 containerd[1823]: time="2025-05-08T01:26:03.587158995Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.204642147s" May 8 
01:26:03.587198 containerd[1823]: time="2025-05-08T01:26:03.587172714Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 01:26:03.587832 containerd[1823]: time="2025-05-08T01:26:03.587785680Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 01:26:03.588437 containerd[1823]: time="2025-05-08T01:26:03.588395368Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 01:26:03.592747 containerd[1823]: time="2025-05-08T01:26:03.592705086Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\"" May 8 01:26:03.593003 containerd[1823]: time="2025-05-08T01:26:03.592943384Z" level=info msg="StartContainer for \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\"" May 8 01:26:03.611711 systemd[1]: Started cri-containerd-19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb.scope - libcontainer container 19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb. May 8 01:26:03.622748 containerd[1823]: time="2025-05-08T01:26:03.622727336Z" level=info msg="StartContainer for \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\" returns successfully" May 8 01:26:03.629486 systemd[1]: cri-containerd-19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb.scope: Deactivated successfully. 
May 8 01:26:04.551157 kubelet[3312]: I0508 01:26:04.551128 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-svn4s" podStartSLOduration=6.551117068 podStartE2EDuration="6.551117068s" podCreationTimestamp="2025-05-08 01:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 01:25:59.540891116 +0000 UTC m=+15.096916228" watchObservedRunningTime="2025-05-08 01:26:04.551117068 +0000 UTC m=+20.107142093" May 8 01:26:04.596417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb-rootfs.mount: Deactivated successfully. May 8 01:26:04.785566 containerd[1823]: time="2025-05-08T01:26:04.785526680Z" level=info msg="shim disconnected" id=19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb namespace=k8s.io May 8 01:26:04.785566 containerd[1823]: time="2025-05-08T01:26:04.785559698Z" level=warning msg="cleaning up after shim disconnected" id=19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb namespace=k8s.io May 8 01:26:04.785566 containerd[1823]: time="2025-05-08T01:26:04.785564737Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 01:26:04.791685 containerd[1823]: time="2025-05-08T01:26:04.791664227Z" level=warning msg="cleanup warnings time=\"2025-05-08T01:26:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 01:26:05.479451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416954399.mount: Deactivated successfully. 
May 8 01:26:05.539367 containerd[1823]: time="2025-05-08T01:26:05.539314205Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 01:26:05.559033 containerd[1823]: time="2025-05-08T01:26:05.559009086Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\"" May 8 01:26:05.559310 containerd[1823]: time="2025-05-08T01:26:05.559294794Z" level=info msg="StartContainer for \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\"" May 8 01:26:05.560376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273347499.mount: Deactivated successfully. May 8 01:26:05.577617 systemd[1]: Started cri-containerd-e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192.scope - libcontainer container e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192. May 8 01:26:05.588655 containerd[1823]: time="2025-05-08T01:26:05.588632469Z" level=info msg="StartContainer for \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\" returns successfully" May 8 01:26:05.597265 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 01:26:05.597432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 01:26:05.597538 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 01:26:05.605769 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 01:26:05.606997 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 01:26:05.607339 systemd[1]: cri-containerd-e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192.scope: Deactivated successfully. 
May 8 01:26:05.612711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192-rootfs.mount: Deactivated successfully. May 8 01:26:05.613367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 01:26:05.697155 containerd[1823]: time="2025-05-08T01:26:05.697116484Z" level=info msg="shim disconnected" id=e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192 namespace=k8s.io May 8 01:26:05.697155 containerd[1823]: time="2025-05-08T01:26:05.697151897Z" level=warning msg="cleaning up after shim disconnected" id=e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192 namespace=k8s.io May 8 01:26:05.697155 containerd[1823]: time="2025-05-08T01:26:05.697159906Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 01:26:05.763553 containerd[1823]: time="2025-05-08T01:26:05.763489778Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:26:05.763713 containerd[1823]: time="2025-05-08T01:26:05.763691532Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 8 01:26:05.763975 containerd[1823]: time="2025-05-08T01:26:05.763961481Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 01:26:05.765019 containerd[1823]: time="2025-05-08T01:26:05.765004472Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.177204274s" May 8 01:26:05.765062 containerd[1823]: time="2025-05-08T01:26:05.765020697Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 01:26:05.765961 containerd[1823]: time="2025-05-08T01:26:05.765948892Z" level=info msg="CreateContainer within sandbox \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 01:26:05.770652 containerd[1823]: time="2025-05-08T01:26:05.770609262Z" level=info msg="CreateContainer within sandbox \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\"" May 8 01:26:05.770864 containerd[1823]: time="2025-05-08T01:26:05.770808995Z" level=info msg="StartContainer for \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\"" May 8 01:26:05.786763 systemd[1]: Started cri-containerd-1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495.scope - libcontainer container 1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495. 
May 8 01:26:05.797688 containerd[1823]: time="2025-05-08T01:26:05.797667672Z" level=info msg="StartContainer for \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\" returns successfully" May 8 01:26:06.551953 containerd[1823]: time="2025-05-08T01:26:06.551869090Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 01:26:06.561041 kubelet[3312]: I0508 01:26:06.560953 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-s8p5k" podStartSLOduration=1.499679361 podStartE2EDuration="7.560924559s" podCreationTimestamp="2025-05-08 01:25:59 +0000 UTC" firstStartedPulling="2025-05-08 01:25:59.704121126 +0000 UTC m=+15.260146154" lastFinishedPulling="2025-05-08 01:26:05.765366325 +0000 UTC m=+21.321391352" observedRunningTime="2025-05-08 01:26:06.560590213 +0000 UTC m=+22.116615319" watchObservedRunningTime="2025-05-08 01:26:06.560924559 +0000 UTC m=+22.116949585" May 8 01:26:06.565851 containerd[1823]: time="2025-05-08T01:26:06.565823361Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\"" May 8 01:26:06.566289 containerd[1823]: time="2025-05-08T01:26:06.566213741Z" level=info msg="StartContainer for \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\"" May 8 01:26:06.587747 systemd[1]: Started cri-containerd-a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e.scope - libcontainer container a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e. 
May 8 01:26:06.603195 containerd[1823]: time="2025-05-08T01:26:06.603168271Z" level=info msg="StartContainer for \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\" returns successfully" May 8 01:26:06.604069 systemd[1]: cri-containerd-a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e.scope: Deactivated successfully. May 8 01:26:06.615104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e-rootfs.mount: Deactivated successfully. May 8 01:26:06.705822 containerd[1823]: time="2025-05-08T01:26:06.705786193Z" level=info msg="shim disconnected" id=a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e namespace=k8s.io May 8 01:26:06.705822 containerd[1823]: time="2025-05-08T01:26:06.705819177Z" level=warning msg="cleaning up after shim disconnected" id=a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e namespace=k8s.io May 8 01:26:06.705822 containerd[1823]: time="2025-05-08T01:26:06.705825543Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 01:26:07.559946 containerd[1823]: time="2025-05-08T01:26:07.559857634Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 01:26:07.568737 containerd[1823]: time="2025-05-08T01:26:07.568691993Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\"" May 8 01:26:07.569067 containerd[1823]: time="2025-05-08T01:26:07.568985773Z" level=info msg="StartContainer for \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\"" May 8 01:26:07.596858 systemd[1]: Started 
cri-containerd-6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d.scope - libcontainer container 6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d. May 8 01:26:07.608755 systemd[1]: cri-containerd-6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d.scope: Deactivated successfully. May 8 01:26:07.609236 containerd[1823]: time="2025-05-08T01:26:07.609203049Z" level=info msg="StartContainer for \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\" returns successfully" May 8 01:26:07.618461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d-rootfs.mount: Deactivated successfully. May 8 01:26:07.619750 containerd[1823]: time="2025-05-08T01:26:07.619720501Z" level=info msg="shim disconnected" id=6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d namespace=k8s.io May 8 01:26:07.619814 containerd[1823]: time="2025-05-08T01:26:07.619750088Z" level=warning msg="cleaning up after shim disconnected" id=6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d namespace=k8s.io May 8 01:26:07.619814 containerd[1823]: time="2025-05-08T01:26:07.619755192Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 01:26:08.568926 containerd[1823]: time="2025-05-08T01:26:08.568802657Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 01:26:08.581228 containerd[1823]: time="2025-05-08T01:26:08.581171997Z" level=info msg="CreateContainer within sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\"" May 8 01:26:08.581586 containerd[1823]: time="2025-05-08T01:26:08.581543651Z" level=info msg="StartContainer for 
\"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\"" May 8 01:26:08.607995 systemd[1]: Started cri-containerd-7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc.scope - libcontainer container 7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc. May 8 01:26:08.667860 containerd[1823]: time="2025-05-08T01:26:08.667795629Z" level=info msg="StartContainer for \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\" returns successfully" May 8 01:26:08.781913 kubelet[3312]: I0508 01:26:08.781897 3312 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 01:26:08.791465 kubelet[3312]: I0508 01:26:08.791442 3312 topology_manager.go:215] "Topology Admit Handler" podUID="83fea18c-92f1-4836-b020-15a29f4e8341" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nqj8j" May 8 01:26:08.791577 kubelet[3312]: I0508 01:26:08.791570 3312 topology_manager.go:215] "Topology Admit Handler" podUID="9108a005-d2fb-4301-82c6-4004605f6575" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4rk8s" May 8 01:26:08.795176 systemd[1]: Created slice kubepods-burstable-pod83fea18c_92f1_4836_b020_15a29f4e8341.slice - libcontainer container kubepods-burstable-pod83fea18c_92f1_4836_b020_15a29f4e8341.slice. May 8 01:26:08.798070 systemd[1]: Created slice kubepods-burstable-pod9108a005_d2fb_4301_82c6_4004605f6575.slice - libcontainer container kubepods-burstable-pod9108a005_d2fb_4301_82c6_4004605f6575.slice. 
May 8 01:26:08.856415 kubelet[3312]: I0508 01:26:08.856362 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83fea18c-92f1-4836-b020-15a29f4e8341-config-volume\") pod \"coredns-7db6d8ff4d-nqj8j\" (UID: \"83fea18c-92f1-4836-b020-15a29f4e8341\") " pod="kube-system/coredns-7db6d8ff4d-nqj8j" May 8 01:26:08.856415 kubelet[3312]: I0508 01:26:08.856386 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k8jx\" (UniqueName: \"kubernetes.io/projected/83fea18c-92f1-4836-b020-15a29f4e8341-kube-api-access-9k8jx\") pod \"coredns-7db6d8ff4d-nqj8j\" (UID: \"83fea18c-92f1-4836-b020-15a29f4e8341\") " pod="kube-system/coredns-7db6d8ff4d-nqj8j" May 8 01:26:08.856415 kubelet[3312]: I0508 01:26:08.856404 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9108a005-d2fb-4301-82c6-4004605f6575-config-volume\") pod \"coredns-7db6d8ff4d-4rk8s\" (UID: \"9108a005-d2fb-4301-82c6-4004605f6575\") " pod="kube-system/coredns-7db6d8ff4d-4rk8s" May 8 01:26:08.856529 kubelet[3312]: I0508 01:26:08.856419 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8cth\" (UniqueName: \"kubernetes.io/projected/9108a005-d2fb-4301-82c6-4004605f6575-kube-api-access-j8cth\") pod \"coredns-7db6d8ff4d-4rk8s\" (UID: \"9108a005-d2fb-4301-82c6-4004605f6575\") " pod="kube-system/coredns-7db6d8ff4d-4rk8s" May 8 01:26:09.097988 containerd[1823]: time="2025-05-08T01:26:09.097865462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nqj8j,Uid:83fea18c-92f1-4836-b020-15a29f4e8341,Namespace:kube-system,Attempt:0,}" May 8 01:26:09.101186 containerd[1823]: time="2025-05-08T01:26:09.101110930Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4rk8s,Uid:9108a005-d2fb-4301-82c6-4004605f6575,Namespace:kube-system,Attempt:0,}" May 8 01:26:09.590967 kubelet[3312]: I0508 01:26:09.590934 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9ggq6" podStartSLOduration=6.385537975 podStartE2EDuration="10.590922246s" podCreationTimestamp="2025-05-08 01:25:59 +0000 UTC" firstStartedPulling="2025-05-08 01:25:59.382320577 +0000 UTC m=+14.938345605" lastFinishedPulling="2025-05-08 01:26:03.587704848 +0000 UTC m=+19.143729876" observedRunningTime="2025-05-08 01:26:09.590764444 +0000 UTC m=+25.146789472" watchObservedRunningTime="2025-05-08 01:26:09.590922246 +0000 UTC m=+25.146947270" May 8 01:26:10.473103 systemd-networkd[1735]: cilium_host: Link UP May 8 01:26:10.473268 systemd-networkd[1735]: cilium_net: Link UP May 8 01:26:10.473449 systemd-networkd[1735]: cilium_net: Gained carrier May 8 01:26:10.473641 systemd-networkd[1735]: cilium_host: Gained carrier May 8 01:26:10.522334 systemd-networkd[1735]: cilium_vxlan: Link UP May 8 01:26:10.522339 systemd-networkd[1735]: cilium_vxlan: Gained carrier May 8 01:26:10.659530 kernel: NET: Registered PF_ALG protocol family May 8 01:26:10.860679 systemd-networkd[1735]: cilium_host: Gained IPv6LL May 8 01:26:11.061197 systemd-networkd[1735]: lxc_health: Link UP May 8 01:26:11.061439 systemd-networkd[1735]: lxc_health: Gained carrier May 8 01:26:11.107640 systemd-networkd[1735]: cilium_net: Gained IPv6LL May 8 01:26:11.157508 kernel: eth0: renamed from tmp3f5ae May 8 01:26:11.175561 kernel: eth0: renamed from tmp7b279 May 8 01:26:11.192319 systemd-networkd[1735]: lxcc2209e4cb934: Link UP May 8 01:26:11.192609 systemd-networkd[1735]: lxc43489fee6d88: Link UP May 8 01:26:11.192987 systemd-networkd[1735]: lxc43489fee6d88: Gained carrier May 8 01:26:11.193114 systemd-networkd[1735]: lxcc2209e4cb934: Gained carrier May 8 01:26:11.811724 systemd-networkd[1735]: cilium_vxlan: Gained IPv6LL May 8 
01:26:12.575643 kubelet[3312]: I0508 01:26:12.575602 3312 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 01:26:13.027667 systemd-networkd[1735]: lxc_health: Gained IPv6LL May 8 01:26:13.027880 systemd-networkd[1735]: lxc43489fee6d88: Gained IPv6LL May 8 01:26:13.027987 systemd-networkd[1735]: lxcc2209e4cb934: Gained IPv6LL May 8 01:26:13.460328 containerd[1823]: time="2025-05-08T01:26:13.460279538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 01:26:13.460328 containerd[1823]: time="2025-05-08T01:26:13.460311820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 01:26:13.460328 containerd[1823]: time="2025-05-08T01:26:13.460319199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:26:13.460630 containerd[1823]: time="2025-05-08T01:26:13.460370744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:26:13.460630 containerd[1823]: time="2025-05-08T01:26:13.460586044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 01:26:13.460630 containerd[1823]: time="2025-05-08T01:26:13.460611824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 01:26:13.460680 containerd[1823]: time="2025-05-08T01:26:13.460629405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:26:13.460813 containerd[1823]: time="2025-05-08T01:26:13.460791502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 01:26:13.476797 systemd[1]: Started cri-containerd-3f5aed02ee5c53919d9f86321d02a478b2a3575a7a58654801b9910660eaeb65.scope - libcontainer container 3f5aed02ee5c53919d9f86321d02a478b2a3575a7a58654801b9910660eaeb65. May 8 01:26:13.477526 systemd[1]: Started cri-containerd-7b279214e72d698201c15c41c18fd7c4e44bb665253b886de733db796c43f694.scope - libcontainer container 7b279214e72d698201c15c41c18fd7c4e44bb665253b886de733db796c43f694. May 8 01:26:13.499364 containerd[1823]: time="2025-05-08T01:26:13.499340991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nqj8j,Uid:83fea18c-92f1-4836-b020-15a29f4e8341,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f5aed02ee5c53919d9f86321d02a478b2a3575a7a58654801b9910660eaeb65\"" May 8 01:26:13.499447 containerd[1823]: time="2025-05-08T01:26:13.499343011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4rk8s,Uid:9108a005-d2fb-4301-82c6-4004605f6575,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b279214e72d698201c15c41c18fd7c4e44bb665253b886de733db796c43f694\"" May 8 01:26:13.500504 containerd[1823]: time="2025-05-08T01:26:13.500487994Z" level=info msg="CreateContainer within sandbox \"3f5aed02ee5c53919d9f86321d02a478b2a3575a7a58654801b9910660eaeb65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 01:26:13.500536 containerd[1823]: time="2025-05-08T01:26:13.500524256Z" level=info msg="CreateContainer within sandbox \"7b279214e72d698201c15c41c18fd7c4e44bb665253b886de733db796c43f694\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 01:26:13.507945 containerd[1823]: time="2025-05-08T01:26:13.507925714Z" level=info msg="CreateContainer within sandbox \"7b279214e72d698201c15c41c18fd7c4e44bb665253b886de733db796c43f694\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"839a226fa83f731b35d2ad995e6ac3ba6bcb8dab212c17679af248c8b0851cfb\"" May 8 
01:26:13.508204 containerd[1823]: time="2025-05-08T01:26:13.508158027Z" level=info msg="StartContainer for \"839a226fa83f731b35d2ad995e6ac3ba6bcb8dab212c17679af248c8b0851cfb\"" May 8 01:26:13.508867 containerd[1823]: time="2025-05-08T01:26:13.508852778Z" level=info msg="CreateContainer within sandbox \"3f5aed02ee5c53919d9f86321d02a478b2a3575a7a58654801b9910660eaeb65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b123955d76641aeb6af4b52aab48b0b6589ac53ccfe2e48229f0999ade93e150\"" May 8 01:26:13.509063 containerd[1823]: time="2025-05-08T01:26:13.509050985Z" level=info msg="StartContainer for \"b123955d76641aeb6af4b52aab48b0b6589ac53ccfe2e48229f0999ade93e150\"" May 8 01:26:13.509339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045523600.mount: Deactivated successfully. May 8 01:26:13.532667 systemd[1]: Started cri-containerd-839a226fa83f731b35d2ad995e6ac3ba6bcb8dab212c17679af248c8b0851cfb.scope - libcontainer container 839a226fa83f731b35d2ad995e6ac3ba6bcb8dab212c17679af248c8b0851cfb. May 8 01:26:13.533322 systemd[1]: Started cri-containerd-b123955d76641aeb6af4b52aab48b0b6589ac53ccfe2e48229f0999ade93e150.scope - libcontainer container b123955d76641aeb6af4b52aab48b0b6589ac53ccfe2e48229f0999ade93e150. 
May 8 01:26:13.547073 containerd[1823]: time="2025-05-08T01:26:13.547048162Z" level=info msg="StartContainer for \"839a226fa83f731b35d2ad995e6ac3ba6bcb8dab212c17679af248c8b0851cfb\" returns successfully" May 8 01:26:13.547073 containerd[1823]: time="2025-05-08T01:26:13.547074601Z" level=info msg="StartContainer for \"b123955d76641aeb6af4b52aab48b0b6589ac53ccfe2e48229f0999ade93e150\" returns successfully" May 8 01:26:13.585030 kubelet[3312]: I0508 01:26:13.584969 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4rk8s" podStartSLOduration=14.584955296 podStartE2EDuration="14.584955296s" podCreationTimestamp="2025-05-08 01:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 01:26:13.584762824 +0000 UTC m=+29.140787865" watchObservedRunningTime="2025-05-08 01:26:13.584955296 +0000 UTC m=+29.140980325" May 8 01:26:13.590209 kubelet[3312]: I0508 01:26:13.590165 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nqj8j" podStartSLOduration=14.590145431 podStartE2EDuration="14.590145431s" podCreationTimestamp="2025-05-08 01:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 01:26:13.590010075 +0000 UTC m=+29.146035108" watchObservedRunningTime="2025-05-08 01:26:13.590145431 +0000 UTC m=+29.146170464" May 8 01:26:26.163126 kubelet[3312]: I0508 01:26:26.162997 3312 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 01:29:13.826719 systemd[1]: Started sshd@9-145.40.90.133:22-80.94.95.115:40596.service - OpenSSH per-connection server daemon (80.94.95.115:40596). 
May 8 01:29:15.831479 sshd[4905]: Connection closed by authenticating user root 80.94.95.115 port 40596 [preauth] May 8 01:29:15.834829 systemd[1]: sshd@9-145.40.90.133:22-80.94.95.115:40596.service: Deactivated successfully. May 8 01:31:42.346513 systemd[1]: Started sshd@10-145.40.90.133:22-147.75.109.163:51162.service - OpenSSH per-connection server daemon (147.75.109.163:51162). May 8 01:31:42.375020 sshd[4929]: Accepted publickey for core from 147.75.109.163 port 51162 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:31:42.375882 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:31:42.379181 systemd-logind[1805]: New session 12 of user core. May 8 01:31:42.386594 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 01:31:42.488629 sshd[4931]: Connection closed by 147.75.109.163 port 51162 May 8 01:31:42.488837 sshd-session[4929]: pam_unix(sshd:session): session closed for user core May 8 01:31:42.490630 systemd[1]: sshd@10-145.40.90.133:22-147.75.109.163:51162.service: Deactivated successfully. May 8 01:31:42.491527 systemd[1]: session-12.scope: Deactivated successfully. May 8 01:31:42.491930 systemd-logind[1805]: Session 12 logged out. Waiting for processes to exit. May 8 01:31:42.492360 systemd-logind[1805]: Removed session 12. May 8 01:31:47.517816 systemd[1]: Started sshd@11-145.40.90.133:22-147.75.109.163:60442.service - OpenSSH per-connection server daemon (147.75.109.163:60442). May 8 01:31:47.544770 sshd[4963]: Accepted publickey for core from 147.75.109.163 port 60442 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:31:47.545531 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:31:47.548889 systemd-logind[1805]: New session 13 of user core. May 8 01:31:47.557717 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 8 01:31:47.645276 sshd[4965]: Connection closed by 147.75.109.163 port 60442 May 8 01:31:47.645437 sshd-session[4963]: pam_unix(sshd:session): session closed for user core May 8 01:31:47.647057 systemd[1]: sshd@11-145.40.90.133:22-147.75.109.163:60442.service: Deactivated successfully. May 8 01:31:47.648043 systemd[1]: session-13.scope: Deactivated successfully. May 8 01:31:47.648798 systemd-logind[1805]: Session 13 logged out. Waiting for processes to exit. May 8 01:31:47.649341 systemd-logind[1805]: Removed session 13. May 8 01:31:52.670153 systemd[1]: Started sshd@12-145.40.90.133:22-147.75.109.163:60458.service - OpenSSH per-connection server daemon (147.75.109.163:60458). May 8 01:31:52.698582 sshd[4991]: Accepted publickey for core from 147.75.109.163 port 60458 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:31:52.699354 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:31:52.702394 systemd-logind[1805]: New session 14 of user core. May 8 01:31:52.711779 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 01:31:52.802180 sshd[4993]: Connection closed by 147.75.109.163 port 60458 May 8 01:31:52.802396 sshd-session[4991]: pam_unix(sshd:session): session closed for user core May 8 01:31:52.804243 systemd[1]: sshd@12-145.40.90.133:22-147.75.109.163:60458.service: Deactivated successfully. May 8 01:31:52.805333 systemd[1]: session-14.scope: Deactivated successfully. May 8 01:31:52.806148 systemd-logind[1805]: Session 14 logged out. Waiting for processes to exit. May 8 01:31:52.806839 systemd-logind[1805]: Removed session 14. May 8 01:31:57.830762 systemd[1]: Started sshd@13-145.40.90.133:22-147.75.109.163:60402.service - OpenSSH per-connection server daemon (147.75.109.163:60402). 
May 8 01:31:57.856895 sshd[5019]: Accepted publickey for core from 147.75.109.163 port 60402 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:31:57.857620 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:31:57.860449 systemd-logind[1805]: New session 15 of user core. May 8 01:31:57.873758 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 01:31:57.960257 sshd[5021]: Connection closed by 147.75.109.163 port 60402 May 8 01:31:57.960414 sshd-session[5019]: pam_unix(sshd:session): session closed for user core May 8 01:31:57.985549 systemd[1]: sshd@13-145.40.90.133:22-147.75.109.163:60402.service: Deactivated successfully. May 8 01:31:57.989772 systemd[1]: session-15.scope: Deactivated successfully. May 8 01:31:57.993221 systemd-logind[1805]: Session 15 logged out. Waiting for processes to exit. May 8 01:31:58.010287 systemd[1]: Started sshd@14-145.40.90.133:22-147.75.109.163:60406.service - OpenSSH per-connection server daemon (147.75.109.163:60406). May 8 01:31:58.012796 systemd-logind[1805]: Removed session 15. May 8 01:31:58.065486 sshd[5045]: Accepted publickey for core from 147.75.109.163 port 60406 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:31:58.066555 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:31:58.070769 systemd-logind[1805]: New session 16 of user core. May 8 01:31:58.081956 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 01:31:58.285518 sshd[5050]: Connection closed by 147.75.109.163 port 60406 May 8 01:31:58.285809 sshd-session[5045]: pam_unix(sshd:session): session closed for user core May 8 01:31:58.303636 systemd[1]: sshd@14-145.40.90.133:22-147.75.109.163:60406.service: Deactivated successfully. May 8 01:31:58.304894 systemd[1]: session-16.scope: Deactivated successfully. May 8 01:31:58.305978 systemd-logind[1805]: Session 16 logged out. 
Waiting for processes to exit. May 8 01:31:58.306997 systemd[1]: Started sshd@15-145.40.90.133:22-147.75.109.163:60416.service - OpenSSH per-connection server daemon (147.75.109.163:60416). May 8 01:31:58.307794 systemd-logind[1805]: Removed session 16. May 8 01:31:58.343044 sshd[5073]: Accepted publickey for core from 147.75.109.163 port 60416 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:31:58.344023 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:31:58.348067 systemd-logind[1805]: New session 17 of user core. May 8 01:31:58.372957 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 01:31:58.507664 sshd[5077]: Connection closed by 147.75.109.163 port 60416 May 8 01:31:58.507845 sshd-session[5073]: pam_unix(sshd:session): session closed for user core May 8 01:31:58.509418 systemd[1]: sshd@15-145.40.90.133:22-147.75.109.163:60416.service: Deactivated successfully. May 8 01:31:58.510356 systemd[1]: session-17.scope: Deactivated successfully. May 8 01:31:58.511115 systemd-logind[1805]: Session 17 logged out. Waiting for processes to exit. May 8 01:31:58.511733 systemd-logind[1805]: Removed session 17. May 8 01:32:03.544700 systemd[1]: Started sshd@16-145.40.90.133:22-147.75.109.163:60432.service - OpenSSH per-connection server daemon (147.75.109.163:60432). May 8 01:32:03.571607 sshd[5106]: Accepted publickey for core from 147.75.109.163 port 60432 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:03.572403 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:03.575581 systemd-logind[1805]: New session 18 of user core. May 8 01:32:03.594703 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 8 01:32:03.685026 sshd[5108]: Connection closed by 147.75.109.163 port 60432 May 8 01:32:03.685225 sshd-session[5106]: pam_unix(sshd:session): session closed for user core May 8 01:32:03.686878 systemd[1]: sshd@16-145.40.90.133:22-147.75.109.163:60432.service: Deactivated successfully. May 8 01:32:03.687910 systemd[1]: session-18.scope: Deactivated successfully. May 8 01:32:03.688693 systemd-logind[1805]: Session 18 logged out. Waiting for processes to exit. May 8 01:32:03.689324 systemd-logind[1805]: Removed session 18. May 8 01:32:08.695574 systemd[1]: Started sshd@17-145.40.90.133:22-147.75.109.163:56204.service - OpenSSH per-connection server daemon (147.75.109.163:56204). May 8 01:32:08.724004 sshd[5133]: Accepted publickey for core from 147.75.109.163 port 56204 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:08.724787 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:08.728059 systemd-logind[1805]: New session 19 of user core. May 8 01:32:08.739771 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 01:32:08.832151 sshd[5135]: Connection closed by 147.75.109.163 port 56204 May 8 01:32:08.832334 sshd-session[5133]: pam_unix(sshd:session): session closed for user core May 8 01:32:08.857525 systemd[1]: sshd@17-145.40.90.133:22-147.75.109.163:56204.service: Deactivated successfully. May 8 01:32:08.861591 systemd[1]: session-19.scope: Deactivated successfully. May 8 01:32:08.864993 systemd-logind[1805]: Session 19 logged out. Waiting for processes to exit. May 8 01:32:08.879284 systemd[1]: Started sshd@18-145.40.90.133:22-147.75.109.163:56208.service - OpenSSH per-connection server daemon (147.75.109.163:56208). May 8 01:32:08.881944 systemd-logind[1805]: Removed session 19. 
May 8 01:32:08.931958 sshd[5159]: Accepted publickey for core from 147.75.109.163 port 56208 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:08.932832 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:08.936425 systemd-logind[1805]: New session 20 of user core. May 8 01:32:08.947736 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 01:32:09.207615 sshd[5164]: Connection closed by 147.75.109.163 port 56208 May 8 01:32:09.208272 sshd-session[5159]: pam_unix(sshd:session): session closed for user core May 8 01:32:09.227749 systemd[1]: sshd@18-145.40.90.133:22-147.75.109.163:56208.service: Deactivated successfully. May 8 01:32:09.231777 systemd[1]: session-20.scope: Deactivated successfully. May 8 01:32:09.234052 systemd-logind[1805]: Session 20 logged out. Waiting for processes to exit. May 8 01:32:09.258344 systemd[1]: Started sshd@19-145.40.90.133:22-147.75.109.163:56220.service - OpenSSH per-connection server daemon (147.75.109.163:56220). May 8 01:32:09.260672 systemd-logind[1805]: Removed session 20. May 8 01:32:09.316942 sshd[5186]: Accepted publickey for core from 147.75.109.163 port 56220 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:09.318222 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:09.322805 systemd-logind[1805]: New session 21 of user core. May 8 01:32:09.339765 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 01:32:10.546799 sshd[5189]: Connection closed by 147.75.109.163 port 56220 May 8 01:32:10.547813 sshd-session[5186]: pam_unix(sshd:session): session closed for user core May 8 01:32:10.571867 systemd[1]: sshd@19-145.40.90.133:22-147.75.109.163:56220.service: Deactivated successfully. May 8 01:32:10.576539 systemd[1]: session-21.scope: Deactivated successfully. May 8 01:32:10.579061 systemd-logind[1805]: Session 21 logged out. 
Waiting for processes to exit. May 8 01:32:10.593950 systemd[1]: Started sshd@20-145.40.90.133:22-147.75.109.163:56230.service - OpenSSH per-connection server daemon (147.75.109.163:56230). May 8 01:32:10.595302 systemd-logind[1805]: Removed session 21. May 8 01:32:10.635001 sshd[5220]: Accepted publickey for core from 147.75.109.163 port 56230 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:10.635980 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:10.639951 systemd-logind[1805]: New session 22 of user core. May 8 01:32:10.653693 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 01:32:10.859816 sshd[5225]: Connection closed by 147.75.109.163 port 56230 May 8 01:32:10.860052 sshd-session[5220]: pam_unix(sshd:session): session closed for user core May 8 01:32:10.876600 systemd[1]: sshd@20-145.40.90.133:22-147.75.109.163:56230.service: Deactivated successfully. May 8 01:32:10.878009 systemd[1]: session-22.scope: Deactivated successfully. May 8 01:32:10.879198 systemd-logind[1805]: Session 22 logged out. Waiting for processes to exit. May 8 01:32:10.880241 systemd[1]: Started sshd@21-145.40.90.133:22-147.75.109.163:56242.service - OpenSSH per-connection server daemon (147.75.109.163:56242). May 8 01:32:10.881077 systemd-logind[1805]: Removed session 22. May 8 01:32:10.928428 sshd[5247]: Accepted publickey for core from 147.75.109.163 port 56242 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:10.930148 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:10.936601 systemd-logind[1805]: New session 23 of user core. May 8 01:32:10.948903 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 8 01:32:11.079628 sshd[5252]: Connection closed by 147.75.109.163 port 56242 May 8 01:32:11.079980 sshd-session[5247]: pam_unix(sshd:session): session closed for user core May 8 01:32:11.081868 systemd[1]: sshd@21-145.40.90.133:22-147.75.109.163:56242.service: Deactivated successfully. May 8 01:32:11.082726 systemd[1]: session-23.scope: Deactivated successfully. May 8 01:32:11.083103 systemd-logind[1805]: Session 23 logged out. Waiting for processes to exit. May 8 01:32:11.083535 systemd-logind[1805]: Removed session 23. May 8 01:32:16.111806 systemd[1]: Started sshd@22-145.40.90.133:22-147.75.109.163:56254.service - OpenSSH per-connection server daemon (147.75.109.163:56254). May 8 01:32:16.138501 sshd[5283]: Accepted publickey for core from 147.75.109.163 port 56254 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:16.139188 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:16.142270 systemd-logind[1805]: New session 24 of user core. May 8 01:32:16.153748 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 01:32:16.238842 sshd[5285]: Connection closed by 147.75.109.163 port 56254 May 8 01:32:16.239024 sshd-session[5283]: pam_unix(sshd:session): session closed for user core May 8 01:32:16.240643 systemd[1]: sshd@22-145.40.90.133:22-147.75.109.163:56254.service: Deactivated successfully. May 8 01:32:16.241573 systemd[1]: session-24.scope: Deactivated successfully. May 8 01:32:16.242273 systemd-logind[1805]: Session 24 logged out. Waiting for processes to exit. May 8 01:32:16.242953 systemd-logind[1805]: Removed session 24. May 8 01:32:21.263793 systemd[1]: Started sshd@23-145.40.90.133:22-147.75.109.163:50236.service - OpenSSH per-connection server daemon (147.75.109.163:50236). 
May 8 01:32:21.290775 sshd[5309]: Accepted publickey for core from 147.75.109.163 port 50236 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:21.294019 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:21.306168 systemd-logind[1805]: New session 25 of user core. May 8 01:32:21.328974 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 01:32:21.424063 sshd[5311]: Connection closed by 147.75.109.163 port 50236 May 8 01:32:21.424255 sshd-session[5309]: pam_unix(sshd:session): session closed for user core May 8 01:32:21.425918 systemd[1]: sshd@23-145.40.90.133:22-147.75.109.163:50236.service: Deactivated successfully. May 8 01:32:21.426865 systemd[1]: session-25.scope: Deactivated successfully. May 8 01:32:21.427551 systemd-logind[1805]: Session 25 logged out. Waiting for processes to exit. May 8 01:32:21.428260 systemd-logind[1805]: Removed session 25. May 8 01:32:26.442253 systemd[1]: Started sshd@24-145.40.90.133:22-147.75.109.163:50238.service - OpenSSH per-connection server daemon (147.75.109.163:50238). May 8 01:32:26.469639 sshd[5336]: Accepted publickey for core from 147.75.109.163 port 50238 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:26.470317 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:26.473131 systemd-logind[1805]: New session 26 of user core. May 8 01:32:26.481730 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 01:32:26.569335 sshd[5338]: Connection closed by 147.75.109.163 port 50238 May 8 01:32:26.569712 sshd-session[5336]: pam_unix(sshd:session): session closed for user core May 8 01:32:26.584625 systemd[1]: sshd@24-145.40.90.133:22-147.75.109.163:50238.service: Deactivated successfully. May 8 01:32:26.585384 systemd[1]: session-26.scope: Deactivated successfully. May 8 01:32:26.585846 systemd-logind[1805]: Session 26 logged out. 
Waiting for processes to exit. May 8 01:32:26.586726 systemd[1]: Started sshd@25-145.40.90.133:22-147.75.109.163:50246.service - OpenSSH per-connection server daemon (147.75.109.163:50246). May 8 01:32:26.587209 systemd-logind[1805]: Removed session 26. May 8 01:32:26.614894 sshd[5362]: Accepted publickey for core from 147.75.109.163 port 50246 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI May 8 01:32:26.615479 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 01:32:26.618130 systemd-logind[1805]: New session 27 of user core. May 8 01:32:26.630654 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 01:32:27.999701 containerd[1823]: time="2025-05-08T01:32:27.997812913Z" level=info msg="StopContainer for \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\" with timeout 30 (s)" May 8 01:32:27.999701 containerd[1823]: time="2025-05-08T01:32:27.998984089Z" level=info msg="Stop container \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\" with signal terminated" May 8 01:32:28.013559 systemd[1]: cri-containerd-1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495.scope: Deactivated successfully. May 8 01:32:28.031072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495-rootfs.mount: Deactivated successfully. 
May 8 01:32:28.034069 containerd[1823]: time="2025-05-08T01:32:28.034050697Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 01:32:28.037189 containerd[1823]: time="2025-05-08T01:32:28.037174487Z" level=info msg="StopContainer for \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\" with timeout 2 (s)" May 8 01:32:28.037281 containerd[1823]: time="2025-05-08T01:32:28.037271607Z" level=info msg="Stop container \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\" with signal terminated" May 8 01:32:28.040356 systemd-networkd[1735]: lxc_health: Link DOWN May 8 01:32:28.040358 systemd-networkd[1735]: lxc_health: Lost carrier May 8 01:32:28.053992 containerd[1823]: time="2025-05-08T01:32:28.053962695Z" level=info msg="shim disconnected" id=1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495 namespace=k8s.io May 8 01:32:28.053992 containerd[1823]: time="2025-05-08T01:32:28.053990954Z" level=warning msg="cleaning up after shim disconnected" id=1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495 namespace=k8s.io May 8 01:32:28.054075 containerd[1823]: time="2025-05-08T01:32:28.053998284Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 01:32:28.061442 containerd[1823]: time="2025-05-08T01:32:28.061421890Z" level=info msg="StopContainer for \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\" returns successfully" May 8 01:32:28.061918 containerd[1823]: time="2025-05-08T01:32:28.061876805Z" level=info msg="StopPodSandbox for \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\"" May 8 01:32:28.061918 containerd[1823]: time="2025-05-08T01:32:28.061898753Z" level=info msg="Container to stop \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\" must 
be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 01:32:28.063176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c-shm.mount: Deactivated successfully. May 8 01:32:28.065189 systemd[1]: cri-containerd-17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c.scope: Deactivated successfully. May 8 01:32:28.071206 systemd[1]: cri-containerd-7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc.scope: Deactivated successfully. May 8 01:32:28.071442 systemd[1]: cri-containerd-7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc.scope: Consumed 6.357s CPU time, 171.9M memory peak, 136K read from disk, 13.3M written to disk. May 8 01:32:28.074252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c-rootfs.mount: Deactivated successfully. May 8 01:32:28.080174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc-rootfs.mount: Deactivated successfully. 
May 8 01:32:28.092157 containerd[1823]: time="2025-05-08T01:32:28.092119923Z" level=info msg="shim disconnected" id=17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c namespace=k8s.io May 8 01:32:28.092226 containerd[1823]: time="2025-05-08T01:32:28.092156801Z" level=warning msg="cleaning up after shim disconnected" id=17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c namespace=k8s.io May 8 01:32:28.092226 containerd[1823]: time="2025-05-08T01:32:28.092165100Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 01:32:28.098231 containerd[1823]: time="2025-05-08T01:32:28.098179688Z" level=warning msg="cleanup warnings time=\"2025-05-08T01:32:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 01:32:28.098850 containerd[1823]: time="2025-05-08T01:32:28.098810954Z" level=info msg="TearDown network for sandbox \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\" successfully" May 8 01:32:28.098850 containerd[1823]: time="2025-05-08T01:32:28.098820945Z" level=info msg="StopPodSandbox for \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\" returns successfully" May 8 01:32:28.109920 containerd[1823]: time="2025-05-08T01:32:28.109884103Z" level=info msg="shim disconnected" id=7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc namespace=k8s.io May 8 01:32:28.110015 containerd[1823]: time="2025-05-08T01:32:28.109921511Z" level=warning msg="cleaning up after shim disconnected" id=7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc namespace=k8s.io May 8 01:32:28.110015 containerd[1823]: time="2025-05-08T01:32:28.109931471Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 01:32:28.119404 containerd[1823]: time="2025-05-08T01:32:28.119373040Z" level=info msg="StopContainer for 
\"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\" returns successfully" May 8 01:32:28.119766 containerd[1823]: time="2025-05-08T01:32:28.119746288Z" level=info msg="StopPodSandbox for \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\"" May 8 01:32:28.119837 containerd[1823]: time="2025-05-08T01:32:28.119775048Z" level=info msg="Container to stop \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 01:32:28.119837 containerd[1823]: time="2025-05-08T01:32:28.119804766Z" level=info msg="Container to stop \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 01:32:28.119837 containerd[1823]: time="2025-05-08T01:32:28.119812923Z" level=info msg="Container to stop \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 01:32:28.119837 containerd[1823]: time="2025-05-08T01:32:28.119820377Z" level=info msg="Container to stop \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 01:32:28.119837 containerd[1823]: time="2025-05-08T01:32:28.119827482Z" level=info msg="Container to stop \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 01:32:28.124274 systemd[1]: cri-containerd-dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e.scope: Deactivated successfully. 
May 8 01:32:28.136186 kubelet[3312]: I0508 01:32:28.136147 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgmkx\" (UniqueName: \"kubernetes.io/projected/b922d05a-7343-4943-9145-47176afe120b-kube-api-access-pgmkx\") pod \"b922d05a-7343-4943-9145-47176afe120b\" (UID: \"b922d05a-7343-4943-9145-47176afe120b\") " May 8 01:32:28.136539 kubelet[3312]: I0508 01:32:28.136205 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b922d05a-7343-4943-9145-47176afe120b-cilium-config-path\") pod \"b922d05a-7343-4943-9145-47176afe120b\" (UID: \"b922d05a-7343-4943-9145-47176afe120b\") " May 8 01:32:28.138072 kubelet[3312]: I0508 01:32:28.138037 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b922d05a-7343-4943-9145-47176afe120b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b922d05a-7343-4943-9145-47176afe120b" (UID: "b922d05a-7343-4943-9145-47176afe120b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 01:32:28.162954 kubelet[3312]: I0508 01:32:28.162908 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b922d05a-7343-4943-9145-47176afe120b-kube-api-access-pgmkx" (OuterVolumeSpecName: "kube-api-access-pgmkx") pod "b922d05a-7343-4943-9145-47176afe120b" (UID: "b922d05a-7343-4943-9145-47176afe120b"). InnerVolumeSpecName "kube-api-access-pgmkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 01:32:28.189199 containerd[1823]: time="2025-05-08T01:32:28.189134088Z" level=info msg="shim disconnected" id=dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e namespace=k8s.io May 8 01:32:28.189199 containerd[1823]: time="2025-05-08T01:32:28.189188841Z" level=warning msg="cleaning up after shim disconnected" id=dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e namespace=k8s.io May 8 01:32:28.189434 containerd[1823]: time="2025-05-08T01:32:28.189206252Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 01:32:28.205156 containerd[1823]: time="2025-05-08T01:32:28.205081892Z" level=info msg="TearDown network for sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" successfully" May 8 01:32:28.205156 containerd[1823]: time="2025-05-08T01:32:28.205143700Z" level=info msg="StopPodSandbox for \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" returns successfully" May 8 01:32:28.236649 kubelet[3312]: I0508 01:32:28.236577 3312 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b922d05a-7343-4943-9145-47176afe120b-cilium-config-path\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.236649 kubelet[3312]: I0508 01:32:28.236642 3312 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pgmkx\" (UniqueName: \"kubernetes.io/projected/b922d05a-7343-4943-9145-47176afe120b-kube-api-access-pgmkx\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.336988 kubelet[3312]: I0508 01:32:28.336874 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-lib-modules\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.336988 kubelet[3312]: I0508 
01:32:28.336963 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cni-path\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.337371 kubelet[3312]: I0508 01:32:28.337037 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b4068de-4aa8-412c-81ba-a166136a59c4-clustermesh-secrets\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.337371 kubelet[3312]: I0508 01:32:28.337029 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.337371 kubelet[3312]: I0508 01:32:28.337086 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-host-proc-sys-kernel\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.337371 kubelet[3312]: I0508 01:32:28.337137 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-run\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.337371 kubelet[3312]: I0508 01:32:28.337155 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.337909 kubelet[3312]: I0508 01:32:28.337147 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.337909 kubelet[3312]: I0508 01:32:28.337211 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.337909 kubelet[3312]: I0508 01:32:28.337181 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-hostproc\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.337909 kubelet[3312]: I0508 01:32:28.337239 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.337909 kubelet[3312]: I0508 01:32:28.337314 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b4068de-4aa8-412c-81ba-a166136a59c4-hubble-tls\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.338389 kubelet[3312]: I0508 01:32:28.337370 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-xtables-lock\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.338389 kubelet[3312]: I0508 01:32:28.337414 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-bpf-maps\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.338389 kubelet[3312]: I0508 01:32:28.337450 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.338389 kubelet[3312]: I0508 01:32:28.337461 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-host-proc-sys-net\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.338389 kubelet[3312]: I0508 01:32:28.337563 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-config-path\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.338389 kubelet[3312]: I0508 01:32:28.337611 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-etc-cni-netd\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.338958 kubelet[3312]: I0508 01:32:28.337600 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.338958 kubelet[3312]: I0508 01:32:28.337632 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.338958 kubelet[3312]: I0508 01:32:28.337696 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.338958 kubelet[3312]: I0508 01:32:28.337654 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-cgroup\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.338958 kubelet[3312]: I0508 01:32:28.337785 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 01:32:28.339418 kubelet[3312]: I0508 01:32:28.337852 3312 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9sdk\" (UniqueName: \"kubernetes.io/projected/6b4068de-4aa8-412c-81ba-a166136a59c4-kube-api-access-k9sdk\") pod \"6b4068de-4aa8-412c-81ba-a166136a59c4\" (UID: \"6b4068de-4aa8-412c-81ba-a166136a59c4\") " May 8 01:32:28.339418 kubelet[3312]: I0508 01:32:28.337956 3312 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-lib-modules\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.339418 kubelet[3312]: I0508 01:32:28.337991 3312 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cni-path\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.339418 kubelet[3312]: I0508 01:32:28.338022 3312 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-host-proc-sys-kernel\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.339418 kubelet[3312]: I0508 01:32:28.338050 3312 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-run\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.339418 kubelet[3312]: I0508 01:32:28.338077 3312 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-hostproc\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.339418 kubelet[3312]: I0508 01:32:28.338101 3312 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-xtables-lock\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.339418 kubelet[3312]: I0508 01:32:28.338127 3312 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-bpf-maps\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.340153 kubelet[3312]: I0508 01:32:28.338151 3312 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-host-proc-sys-net\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.340153 kubelet[3312]: I0508 01:32:28.338176 3312 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-etc-cni-netd\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.340153 kubelet[3312]: I0508 01:32:28.338200 3312 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-cgroup\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.343524 kubelet[3312]: I0508 01:32:28.343445 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b4068de-4aa8-412c-81ba-a166136a59c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 01:32:28.343732 kubelet[3312]: I0508 01:32:28.343461 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 01:32:28.343732 kubelet[3312]: I0508 01:32:28.343543 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4068de-4aa8-412c-81ba-a166136a59c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 01:32:28.344069 kubelet[3312]: I0508 01:32:28.343784 3312 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4068de-4aa8-412c-81ba-a166136a59c4-kube-api-access-k9sdk" (OuterVolumeSpecName: "kube-api-access-k9sdk") pod "6b4068de-4aa8-412c-81ba-a166136a59c4" (UID: "6b4068de-4aa8-412c-81ba-a166136a59c4"). InnerVolumeSpecName "kube-api-access-k9sdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 01:32:28.439433 kubelet[3312]: I0508 01:32:28.439322 3312 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b4068de-4aa8-412c-81ba-a166136a59c4-clustermesh-secrets\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.439433 kubelet[3312]: I0508 01:32:28.439390 3312 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b4068de-4aa8-412c-81ba-a166136a59c4-hubble-tls\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.439433 kubelet[3312]: I0508 01:32:28.439420 3312 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b4068de-4aa8-412c-81ba-a166136a59c4-cilium-config-path\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.439433 kubelet[3312]: I0508 01:32:28.439448 3312 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k9sdk\" (UniqueName: \"kubernetes.io/projected/6b4068de-4aa8-412c-81ba-a166136a59c4-kube-api-access-k9sdk\") on node \"ci-4230.1.1-n-cd63e3b163\" DevicePath \"\"" May 8 01:32:28.506080 systemd[1]: Removed slice kubepods-burstable-pod6b4068de_4aa8_412c_81ba_a166136a59c4.slice - libcontainer container kubepods-burstable-pod6b4068de_4aa8_412c_81ba_a166136a59c4.slice. May 8 01:32:28.506379 systemd[1]: kubepods-burstable-pod6b4068de_4aa8_412c_81ba_a166136a59c4.slice: Consumed 6.398s CPU time, 172.5M memory peak, 136K read from disk, 13.3M written to disk. May 8 01:32:28.509382 systemd[1]: Removed slice kubepods-besteffort-podb922d05a_7343_4943_9145_47176afe120b.slice - libcontainer container kubepods-besteffort-podb922d05a_7343_4943_9145_47176afe120b.slice. 
May 8 01:32:28.641998 kubelet[3312]: I0508 01:32:28.641766 3312 scope.go:117] "RemoveContainer" containerID="7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc" May 8 01:32:28.644711 containerd[1823]: time="2025-05-08T01:32:28.644633408Z" level=info msg="RemoveContainer for \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\"" May 8 01:32:28.657007 containerd[1823]: time="2025-05-08T01:32:28.656964813Z" level=info msg="RemoveContainer for \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\" returns successfully" May 8 01:32:28.657136 kubelet[3312]: I0508 01:32:28.657096 3312 scope.go:117] "RemoveContainer" containerID="6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d" May 8 01:32:28.657743 containerd[1823]: time="2025-05-08T01:32:28.657725790Z" level=info msg="RemoveContainer for \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\"" May 8 01:32:28.659383 containerd[1823]: time="2025-05-08T01:32:28.659367534Z" level=info msg="RemoveContainer for \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\" returns successfully" May 8 01:32:28.659472 kubelet[3312]: I0508 01:32:28.659454 3312 scope.go:117] "RemoveContainer" containerID="a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e" May 8 01:32:28.659911 containerd[1823]: time="2025-05-08T01:32:28.659898187Z" level=info msg="RemoveContainer for \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\"" May 8 01:32:28.660980 containerd[1823]: time="2025-05-08T01:32:28.660968962Z" level=info msg="RemoveContainer for \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\" returns successfully" May 8 01:32:28.661030 kubelet[3312]: I0508 01:32:28.661022 3312 scope.go:117] "RemoveContainer" containerID="e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192" May 8 01:32:28.661546 containerd[1823]: time="2025-05-08T01:32:28.661535652Z" level=info msg="RemoveContainer for 
\"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\"" May 8 01:32:28.662671 containerd[1823]: time="2025-05-08T01:32:28.662660709Z" level=info msg="RemoveContainer for \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\" returns successfully" May 8 01:32:28.662792 kubelet[3312]: I0508 01:32:28.662748 3312 scope.go:117] "RemoveContainer" containerID="19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb" May 8 01:32:28.663264 containerd[1823]: time="2025-05-08T01:32:28.663254411Z" level=info msg="RemoveContainer for \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\"" May 8 01:32:28.664435 containerd[1823]: time="2025-05-08T01:32:28.664423270Z" level=info msg="RemoveContainer for \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\" returns successfully" May 8 01:32:28.664515 kubelet[3312]: I0508 01:32:28.664505 3312 scope.go:117] "RemoveContainer" containerID="7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc" May 8 01:32:28.664618 containerd[1823]: time="2025-05-08T01:32:28.664601632Z" level=error msg="ContainerStatus for \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\": not found" May 8 01:32:28.664672 kubelet[3312]: E0508 01:32:28.664661 3312 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\": not found" containerID="7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc" May 8 01:32:28.664715 kubelet[3312]: I0508 01:32:28.664678 3312 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc"} err="failed to 
get container status \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"7efcc8d8e657b42d37413b802d241115f2581983787fc76dcc74de4bf53df6dc\": not found" May 8 01:32:28.664734 kubelet[3312]: I0508 01:32:28.664717 3312 scope.go:117] "RemoveContainer" containerID="6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d" May 8 01:32:28.664797 containerd[1823]: time="2025-05-08T01:32:28.664785101Z" level=error msg="ContainerStatus for \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\": not found" May 8 01:32:28.664839 kubelet[3312]: E0508 01:32:28.664830 3312 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\": not found" containerID="6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d" May 8 01:32:28.664866 kubelet[3312]: I0508 01:32:28.664842 3312 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d"} err="failed to get container status \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6de8537ed947fa1168d87988911fd59e3549daa0657644b001f7ebdfef52d62d\": not found" May 8 01:32:28.664866 kubelet[3312]: I0508 01:32:28.664851 3312 scope.go:117] "RemoveContainer" containerID="a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e" May 8 01:32:28.664958 containerd[1823]: time="2025-05-08T01:32:28.664941416Z" level=error msg="ContainerStatus for 
\"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\": not found" May 8 01:32:28.665046 kubelet[3312]: E0508 01:32:28.665037 3312 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\": not found" containerID="a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e" May 8 01:32:28.665065 kubelet[3312]: I0508 01:32:28.665049 3312 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e"} err="failed to get container status \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a83ea826bf9dae237333b400e2228026726390108e2f78fe295dc20db946266e\": not found" May 8 01:32:28.665065 kubelet[3312]: I0508 01:32:28.665059 3312 scope.go:117] "RemoveContainer" containerID="e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192" May 8 01:32:28.665179 containerd[1823]: time="2025-05-08T01:32:28.665167098Z" level=error msg="ContainerStatus for \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\": not found" May 8 01:32:28.665222 kubelet[3312]: E0508 01:32:28.665215 3312 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\": not found" 
containerID="e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192"
May 8 01:32:28.665244 kubelet[3312]: I0508 01:32:28.665225 3312 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192"} err="failed to get container status \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7d856e24c2d3d364596ff60a886f28873b5f87160fafdec780f4c9959fc0192\": not found"
May 8 01:32:28.665244 kubelet[3312]: I0508 01:32:28.665232 3312 scope.go:117] "RemoveContainer" containerID="19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb"
May 8 01:32:28.665312 containerd[1823]: time="2025-05-08T01:32:28.665300238Z" level=error msg="ContainerStatus for \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\": not found"
May 8 01:32:28.665357 kubelet[3312]: E0508 01:32:28.665348 3312 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\": not found" containerID="19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb"
May 8 01:32:28.665379 kubelet[3312]: I0508 01:32:28.665359 3312 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb"} err="failed to get container status \"19a8b6d896739e042a7e7a64ad1bec73c5787da13a5d3e6403f04a36919211eb\": not found"
May 8 01:32:28.665379 kubelet[3312]: I0508 01:32:28.665368 3312 scope.go:117] "RemoveContainer" containerID="1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495"
May 8 01:32:28.665753 containerd[1823]: time="2025-05-08T01:32:28.665744183Z" level=info msg="RemoveContainer for \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\""
May 8 01:32:28.666732 containerd[1823]: time="2025-05-08T01:32:28.666722420Z" level=info msg="RemoveContainer for \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\" returns successfully"
May 8 01:32:28.666775 kubelet[3312]: I0508 01:32:28.666769 3312 scope.go:117] "RemoveContainer" containerID="1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495"
May 8 01:32:28.666848 containerd[1823]: time="2025-05-08T01:32:28.666835300Z" level=error msg="ContainerStatus for \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\": not found"
May 8 01:32:28.666887 kubelet[3312]: E0508 01:32:28.666880 3312 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\": not found" containerID="1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495"
May 8 01:32:28.666912 kubelet[3312]: I0508 01:32:28.666889 3312 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495"} err="failed to get container status \"1e919b6fa21f6c4b3e076d23de4df2e832ab978a57788e77bb3ec61ce6fd1495\": not found"
May 8 01:32:29.020241 systemd[1]: var-lib-kubelet-pods-b922d05a\x2d7343\x2d4943\x2d9145\x2d47176afe120b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpgmkx.mount: Deactivated successfully.
May 8 01:32:29.020322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e-rootfs.mount: Deactivated successfully.
May 8 01:32:29.020380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e-shm.mount: Deactivated successfully.
May 8 01:32:29.020438 systemd[1]: var-lib-kubelet-pods-6b4068de\x2d4aa8\x2d412c\x2d81ba\x2da166136a59c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9sdk.mount: Deactivated successfully.
May 8 01:32:29.020590 systemd[1]: var-lib-kubelet-pods-6b4068de\x2d4aa8\x2d412c\x2d81ba\x2da166136a59c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 8 01:32:29.020704 systemd[1]: var-lib-kubelet-pods-6b4068de\x2d4aa8\x2d412c\x2d81ba\x2da166136a59c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 8 01:32:29.634413 kubelet[3312]: E0508 01:32:29.634295 3312 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 01:32:29.942765 sshd[5365]: Connection closed by 147.75.109.163 port 50246
May 8 01:32:29.942986 sshd-session[5362]: pam_unix(sshd:session): session closed for user core
May 8 01:32:29.963089 systemd[1]: sshd@25-145.40.90.133:22-147.75.109.163:50246.service: Deactivated successfully.
May 8 01:32:29.964160 systemd[1]: session-27.scope: Deactivated successfully.
May 8 01:32:29.965057 systemd-logind[1805]: Session 27 logged out. Waiting for processes to exit.
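The mount unit names deactivated above (`var-lib-kubelet-pods-…\x2d…\x7eprojected…`) are systemd's escaped form of the underlying filesystem paths. As a hedged sketch of the escaping rule (an approximation of what `systemd-escape --path` produces, not systemd's actual implementation, which works on raw bytes and ASCII classes only):

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd's path escaping for unit names: strip
    slashes at the ends, turn '/' into '-', and rewrite any other
    character outside [A-Za-z0-9:_.] as \\xXX lowercase hex --
    which is why '-' appears as \\x2d and '~' as \\x7e above."""
    trimmed = path.strip("/")
    if not trimmed:
        return "-"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.extend("\\x%02x" % b for b in ch.encode("utf-8"))
    return "".join(out)
```

Running it on the volume path of the `kube-api-access-pgmkx` mount reproduces the unit name seen in the log (minus the `.mount` suffix).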
May 8 01:32:29.965983 systemd[1]: Started sshd@26-145.40.90.133:22-147.75.109.163:46196.service - OpenSSH per-connection server daemon (147.75.109.163:46196).
May 8 01:32:29.966529 systemd-logind[1805]: Removed session 27.
May 8 01:32:30.001155 sshd[5544]: Accepted publickey for core from 147.75.109.163 port 46196 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI
May 8 01:32:30.002113 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 01:32:30.005956 systemd-logind[1805]: New session 28 of user core.
May 8 01:32:30.020721 systemd[1]: Started session-28.scope - Session 28 of User core.
May 8 01:32:30.373222 sshd[5547]: Connection closed by 147.75.109.163 port 46196
May 8 01:32:30.373412 sshd-session[5544]: pam_unix(sshd:session): session closed for user core
May 8 01:32:30.379725 kubelet[3312]: I0508 01:32:30.379695 3312 topology_manager.go:215] "Topology Admit Handler" podUID="3dc7dfae-0fb6-490f-ae3f-f4d59059de28" podNamespace="kube-system" podName="cilium-2x7n5"
May 8 01:32:30.379842 kubelet[3312]: E0508 01:32:30.379746 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4068de-4aa8-412c-81ba-a166136a59c4" containerName="mount-cgroup"
May 8 01:32:30.379842 kubelet[3312]: E0508 01:32:30.379756 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b922d05a-7343-4943-9145-47176afe120b" containerName="cilium-operator"
May 8 01:32:30.379842 kubelet[3312]: E0508 01:32:30.379762 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4068de-4aa8-412c-81ba-a166136a59c4" containerName="mount-bpf-fs"
May 8 01:32:30.379842 kubelet[3312]: E0508 01:32:30.379767 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4068de-4aa8-412c-81ba-a166136a59c4" containerName="clean-cilium-state"
May 8 01:32:30.379842 kubelet[3312]: E0508 01:32:30.379771 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4068de-4aa8-412c-81ba-a166136a59c4" containerName="cilium-agent"
May 8 01:32:30.379842 kubelet[3312]: E0508 01:32:30.379775 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4068de-4aa8-412c-81ba-a166136a59c4" containerName="apply-sysctl-overwrites"
May 8 01:32:30.379842 kubelet[3312]: I0508 01:32:30.379789 3312 memory_manager.go:354] "RemoveStaleState removing state" podUID="b922d05a-7343-4943-9145-47176afe120b" containerName="cilium-operator"
May 8 01:32:30.379842 kubelet[3312]: I0508 01:32:30.379795 3312 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b4068de-4aa8-412c-81ba-a166136a59c4" containerName="cilium-agent"
May 8 01:32:30.388103 systemd[1]: sshd@26-145.40.90.133:22-147.75.109.163:46196.service: Deactivated successfully.
May 8 01:32:30.391449 systemd[1]: session-28.scope: Deactivated successfully.
May 8 01:32:30.393520 systemd-logind[1805]: Session 28 logged out. Waiting for processes to exit.
May 8 01:32:30.396142 systemd[1]: Started sshd@27-145.40.90.133:22-147.75.109.163:46210.service - OpenSSH per-connection server daemon (147.75.109.163:46210).
May 8 01:32:30.397633 systemd-logind[1805]: Removed session 28.
May 8 01:32:30.399561 systemd[1]: Created slice kubepods-burstable-pod3dc7dfae_0fb6_490f_ae3f_f4d59059de28.slice - libcontainer container kubepods-burstable-pod3dc7dfae_0fb6_490f_ae3f_f4d59059de28.slice.
May 8 01:32:30.424712 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 46210 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI
May 8 01:32:30.425295 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 01:32:30.428203 systemd-logind[1805]: New session 29 of user core.
May 8 01:32:30.428820 systemd[1]: Started session-29.scope - Session 29 of User core.
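The `Created slice` entry shows the pod's cgroup slice, `kubepods-burstable-pod3dc7dfae_0fb6_490f_ae3f_f4d59059de28.slice`, derived from the pod's QoS class and UID. A hypothetical helper reproducing that naming convention (the function name and default are illustrative, not a kubelet API):

```python
def pod_slice_name(pod_uid: str, qos: str = "burstable") -> str:
    # systemd reserves '-' as a hierarchy separator in slice names,
    # so the kubelet's systemd cgroup driver swaps '-' for '_' in
    # the pod UID when forming kubepods-<qos>-pod<uid>.slice.
    return "kubepods-%s-pod%s.slice" % (qos, pod_uid.replace("-", "_"))
```

Applied to the cilium-2x7n5 pod UID from the Topology Admit Handler entry above, this yields exactly the slice name systemd reports creating.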
May 8 01:32:30.451379 kubelet[3312]: I0508 01:32:30.451340 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-host-proc-sys-kernel\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451379 kubelet[3312]: I0508 01:32:30.451361 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-cilium-run\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451379 kubelet[3312]: I0508 01:32:30.451377 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-bpf-maps\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451464 kubelet[3312]: I0508 01:32:30.451400 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-clustermesh-secrets\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451464 kubelet[3312]: I0508 01:32:30.451425 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-cilium-cgroup\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451464 kubelet[3312]: I0508 01:32:30.451447 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-hubble-tls\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451464 kubelet[3312]: I0508 01:32:30.451462 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-cni-path\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451549 kubelet[3312]: I0508 01:32:30.451472 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-lib-modules\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451549 kubelet[3312]: I0508 01:32:30.451482 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-xtables-lock\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451549 kubelet[3312]: I0508 01:32:30.451492 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gm8m\" (UniqueName: \"kubernetes.io/projected/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-kube-api-access-2gm8m\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451549 kubelet[3312]: I0508 01:32:30.451508 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-host-proc-sys-net\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451549 kubelet[3312]: I0508 01:32:30.451519 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-hostproc\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451549 kubelet[3312]: I0508 01:32:30.451529 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-cilium-config-path\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451654 kubelet[3312]: I0508 01:32:30.451540 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-etc-cni-netd\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.451654 kubelet[3312]: I0508 01:32:30.451549 3312 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3dc7dfae-0fb6-490f-ae3f-f4d59059de28-cilium-ipsec-secrets\") pod \"cilium-2x7n5\" (UID: \"3dc7dfae-0fb6-490f-ae3f-f4d59059de28\") " pod="kube-system/cilium-2x7n5"
May 8 01:32:30.475079 sshd[5573]: Connection closed by 147.75.109.163 port 46210
May 8 01:32:30.475355 sshd-session[5569]: pam_unix(sshd:session): session closed for user core
May 8 01:32:30.491521 kubelet[3312]: I0508 01:32:30.491453 3312 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b4068de-4aa8-412c-81ba-a166136a59c4" path="/var/lib/kubelet/pods/6b4068de-4aa8-412c-81ba-a166136a59c4/volumes"
May 8 01:32:30.492365 kubelet[3312]: I0508 01:32:30.492312 3312 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b922d05a-7343-4943-9145-47176afe120b" path="/var/lib/kubelet/pods/b922d05a-7343-4943-9145-47176afe120b/volumes"
May 8 01:32:30.502411 systemd[1]: sshd@27-145.40.90.133:22-147.75.109.163:46210.service: Deactivated successfully.
May 8 01:32:30.506131 systemd[1]: session-29.scope: Deactivated successfully.
May 8 01:32:30.508435 systemd-logind[1805]: Session 29 logged out. Waiting for processes to exit.
May 8 01:32:30.529521 systemd[1]: Started sshd@28-145.40.90.133:22-147.75.109.163:46216.service - OpenSSH per-connection server daemon (147.75.109.163:46216).
May 8 01:32:30.532219 systemd-logind[1805]: Removed session 29.
May 8 01:32:30.587474 sshd[5579]: Accepted publickey for core from 147.75.109.163 port 46216 ssh2: RSA SHA256:dtekfGLQq93glBOUih+Iz+QFyV19jQBd8EMzhR8h1QI
May 8 01:32:30.588352 sshd-session[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 01:32:30.591800 systemd-logind[1805]: New session 30 of user core.
May 8 01:32:30.607765 systemd[1]: Started session-30.scope - Session 30 of User core.
May 8 01:32:30.701625 containerd[1823]: time="2025-05-08T01:32:30.701523061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2x7n5,Uid:3dc7dfae-0fb6-490f-ae3f-f4d59059de28,Namespace:kube-system,Attempt:0,}"
May 8 01:32:30.710802 containerd[1823]: time="2025-05-08T01:32:30.710581152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 01:32:30.710802 containerd[1823]: time="2025-05-08T01:32:30.710794199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 01:32:30.710802 containerd[1823]: time="2025-05-08T01:32:30.710802439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 01:32:30.710906 containerd[1823]: time="2025-05-08T01:32:30.710843343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 01:32:30.734251 systemd[1]: Started cri-containerd-20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108.scope - libcontainer container 20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108.
May 8 01:32:30.785373 containerd[1823]: time="2025-05-08T01:32:30.785251157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2x7n5,Uid:3dc7dfae-0fb6-490f-ae3f-f4d59059de28,Namespace:kube-system,Attempt:0,} returns sandbox id \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\""
May 8 01:32:30.790992 containerd[1823]: time="2025-05-08T01:32:30.790887969Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 01:32:30.798992 containerd[1823]: time="2025-05-08T01:32:30.798976103Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8b9a6b3ab038c2da0596c52ca5e9a094b6f6d95efddbb460ba08421ee50feb00\""
May 8 01:32:30.799226 containerd[1823]: time="2025-05-08T01:32:30.799172044Z" level=info msg="StartContainer for \"8b9a6b3ab038c2da0596c52ca5e9a094b6f6d95efddbb460ba08421ee50feb00\""
May 8 01:32:30.825726 systemd[1]: Started cri-containerd-8b9a6b3ab038c2da0596c52ca5e9a094b6f6d95efddbb460ba08421ee50feb00.scope - libcontainer container 8b9a6b3ab038c2da0596c52ca5e9a094b6f6d95efddbb460ba08421ee50feb00.
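The containerd entries in this stretch are logfmt-style `key=value` pairs, with values quoted only when they contain spaces. A small, hypothetical parser for pulling `time`/`level`/`msg` out of such lines (a sketch for working with this dump, not containerd's own tooling):

```python
import re

# Matches key=value where value is either a double-quoted string
# (allowing backslash escapes, as in the nested \" around IDs) or
# a bare token such as level=info or runtime=io.containerd.runc.v2.
_FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    fields = {}
    for key, value in _FIELD.findall(line):
        if value.startswith('"'):
            # Strip the outer quotes and unescape embedded \" marks.
            value = value[1:-1].replace('\\"', '"')
        fields[key] = value
    return fields
```

Feeding it one of the `loading plugin` lines above yields a dict with `time`, `level`, `msg`, and `runtime` keys, with the escaped quotes around the plugin name restored.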
May 8 01:32:30.842279 systemd[1]: cri-containerd-8b9a6b3ab038c2da0596c52ca5e9a094b6f6d95efddbb460ba08421ee50feb00.scope: Deactivated successfully.
May 8 01:32:30.853799 containerd[1823]: time="2025-05-08T01:32:30.853742632Z" level=info msg="StartContainer for \"8b9a6b3ab038c2da0596c52ca5e9a094b6f6d95efddbb460ba08421ee50feb00\" returns successfully"
May 8 01:32:30.879867 containerd[1823]: time="2025-05-08T01:32:30.879802845Z" level=info msg="shim disconnected" id=8b9a6b3ab038c2da0596c52ca5e9a094b6f6d95efddbb460ba08421ee50feb00 namespace=k8s.io
May 8 01:32:30.879867 containerd[1823]: time="2025-05-08T01:32:30.879833379Z" level=warning msg="cleaning up after shim disconnected" id=8b9a6b3ab038c2da0596c52ca5e9a094b6f6d95efddbb460ba08421ee50feb00 namespace=k8s.io
May 8 01:32:30.879867 containerd[1823]: time="2025-05-08T01:32:30.879838364Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 01:32:31.671487 containerd[1823]: time="2025-05-08T01:32:31.671397717Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 01:32:31.677998 containerd[1823]: time="2025-05-08T01:32:31.677953096Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48\""
May 8 01:32:31.678379 containerd[1823]: time="2025-05-08T01:32:31.678362996Z" level=info msg="StartContainer for \"712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48\""
May 8 01:32:31.708643 systemd[1]: Started cri-containerd-712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48.scope - libcontainer container 712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48.
May 8 01:32:31.721461 containerd[1823]: time="2025-05-08T01:32:31.721414277Z" level=info msg="StartContainer for \"712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48\" returns successfully"
May 8 01:32:31.725810 systemd[1]: cri-containerd-712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48.scope: Deactivated successfully.
May 8 01:32:31.751635 containerd[1823]: time="2025-05-08T01:32:31.751473020Z" level=info msg="shim disconnected" id=712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48 namespace=k8s.io
May 8 01:32:31.751635 containerd[1823]: time="2025-05-08T01:32:31.751632196Z" level=warning msg="cleaning up after shim disconnected" id=712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48 namespace=k8s.io
May 8 01:32:31.752174 containerd[1823]: time="2025-05-08T01:32:31.751662882Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 01:32:32.562756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712cd53976d13a2c45d5a24418131a5ee5d25492634d6722ee6c9774cb395c48-rootfs.mount: Deactivated successfully.
May 8 01:32:32.677809 containerd[1823]: time="2025-05-08T01:32:32.677686070Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 01:32:32.689561 containerd[1823]: time="2025-05-08T01:32:32.689486748Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525\""
May 8 01:32:32.690012 containerd[1823]: time="2025-05-08T01:32:32.689951947Z" level=info msg="StartContainer for \"c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525\""
May 8 01:32:32.722652 systemd[1]: Started cri-containerd-c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525.scope - libcontainer container c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525.
May 8 01:32:32.738999 containerd[1823]: time="2025-05-08T01:32:32.738971316Z" level=info msg="StartContainer for \"c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525\" returns successfully"
May 8 01:32:32.740813 systemd[1]: cri-containerd-c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525.scope: Deactivated successfully.
May 8 01:32:32.756382 containerd[1823]: time="2025-05-08T01:32:32.756348714Z" level=info msg="shim disconnected" id=c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525 namespace=k8s.io
May 8 01:32:32.756382 containerd[1823]: time="2025-05-08T01:32:32.756380097Z" level=warning msg="cleaning up after shim disconnected" id=c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525 namespace=k8s.io
May 8 01:32:32.756382 containerd[1823]: time="2025-05-08T01:32:32.756385733Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 01:32:32.762065 containerd[1823]: time="2025-05-08T01:32:32.762016716Z" level=warning msg="cleanup warnings time=\"2025-05-08T01:32:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 01:32:33.563009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5fdda33c9f950325e7ea36ebee3d671f7003139a133f3402f77262d99243525-rootfs.mount: Deactivated successfully.
May 8 01:32:33.677177 containerd[1823]: time="2025-05-08T01:32:33.677152892Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 01:32:33.681502 containerd[1823]: time="2025-05-08T01:32:33.681472178Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d\""
May 8 01:32:33.681923 containerd[1823]: time="2025-05-08T01:32:33.681909594Z" level=info msg="StartContainer for \"f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d\""
May 8 01:32:33.712755 systemd[1]: Started cri-containerd-f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d.scope - libcontainer container f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d.
May 8 01:32:33.726997 systemd[1]: cri-containerd-f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d.scope: Deactivated successfully.
May 8 01:32:33.727324 containerd[1823]: time="2025-05-08T01:32:33.727238499Z" level=info msg="StartContainer for \"f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d\" returns successfully"
May 8 01:32:33.742365 containerd[1823]: time="2025-05-08T01:32:33.742331787Z" level=info msg="shim disconnected" id=f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d namespace=k8s.io
May 8 01:32:33.742365 containerd[1823]: time="2025-05-08T01:32:33.742363435Z" level=warning msg="cleaning up after shim disconnected" id=f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d namespace=k8s.io
May 8 01:32:33.742365 containerd[1823]: time="2025-05-08T01:32:33.742368491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 01:32:33.818035 kubelet[3312]: I0508 01:32:33.817807 3312 setters.go:580] "Node became not ready" node="ci-4230.1.1-n-cd63e3b163" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T01:32:33Z","lastTransitionTime":"2025-05-08T01:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 01:32:34.563020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5f2a9d6e2bc6cc78a907224fdd500271a108d15bd2aa488e5425f68a3e5e32d-rootfs.mount: Deactivated successfully.
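The `Node became not ready` entry carries the node condition as a JSON object, so it parses directly. The snippet below re-reads the condition payload logged above verbatim; only the readiness check at the end is an added illustration:

```python
import json

# The condition payload from the kubelet setters.go entry above, verbatim.
raw = ('{"type":"Ready","status":"False",'
       '"lastHeartbeatTime":"2025-05-08T01:32:33Z",'
       '"lastTransitionTime":"2025-05-08T01:32:33Z",'
       '"reason":"KubeletNotReady",'
       '"message":"container runtime network not ready: '
       'NetworkReady=false reason:NetworkPluginNotReady '
       'message:Network plugin returns error: cni plugin not initialized"}')

condition = json.loads(raw)
# A node counts as Ready only when the Ready condition's status is "True";
# here it is "False" because the CNI plugin is not yet initialized.
node_ready = condition["type"] == "Ready" and condition["status"] == "True"
```

The Ready condition flips back once Cilium's agent comes up and writes its CNI configuration, which is what the log shows happening over the next few seconds.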
May 8 01:32:34.636487 kubelet[3312]: E0508 01:32:34.636392 3312 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 01:32:34.691087 containerd[1823]: time="2025-05-08T01:32:34.690567820Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 01:32:34.704429 containerd[1823]: time="2025-05-08T01:32:34.704400457Z" level=info msg="CreateContainer within sandbox \"20498ff4080c349bb65e3e47c44d753683c3fc2fe396cadab5cab9defb838108\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a29d183c2db606ef6c18da2a213214f414c0595d94a6bc8d1e654b9c6c6e41ac\""
May 8 01:32:34.704961 containerd[1823]: time="2025-05-08T01:32:34.704857742Z" level=info msg="StartContainer for \"a29d183c2db606ef6c18da2a213214f414c0595d94a6bc8d1e654b9c6c6e41ac\""
May 8 01:32:34.705474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431836882.mount: Deactivated successfully.
May 8 01:32:34.727819 systemd[1]: Started cri-containerd-a29d183c2db606ef6c18da2a213214f414c0595d94a6bc8d1e654b9c6c6e41ac.scope - libcontainer container a29d183c2db606ef6c18da2a213214f414c0595d94a6bc8d1e654b9c6c6e41ac.
May 8 01:32:34.740651 containerd[1823]: time="2025-05-08T01:32:34.740626744Z" level=info msg="StartContainer for \"a29d183c2db606ef6c18da2a213214f414c0595d94a6bc8d1e654b9c6c6e41ac\" returns successfully"
May 8 01:32:34.898569 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 01:32:35.710975 kubelet[3312]: I0508 01:32:35.710920 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2x7n5" podStartSLOduration=5.710909694 podStartE2EDuration="5.710909694s" podCreationTimestamp="2025-05-08 01:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 01:32:35.710646774 +0000 UTC m=+411.266671801" watchObservedRunningTime="2025-05-08 01:32:35.710909694 +0000 UTC m=+411.266934719"
May 8 01:32:38.184050 systemd-networkd[1735]: lxc_health: Link UP
May 8 01:32:38.184292 systemd-networkd[1735]: lxc_health: Gained carrier
May 8 01:32:39.715633 systemd-networkd[1735]: lxc_health: Gained IPv6LL
May 8 01:32:44.497477 containerd[1823]: time="2025-05-08T01:32:44.497385467Z" level=info msg="StopPodSandbox for \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\""
May 8 01:32:44.498653 containerd[1823]: time="2025-05-08T01:32:44.497628888Z" level=info msg="TearDown network for sandbox \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\" successfully"
May 8 01:32:44.498653 containerd[1823]: time="2025-05-08T01:32:44.497669611Z" level=info msg="StopPodSandbox for \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\" returns successfully"
May 8 01:32:44.499107 containerd[1823]: time="2025-05-08T01:32:44.498732697Z" level=info msg="RemovePodSandbox for \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\""
May 8 01:32:44.499107 containerd[1823]: time="2025-05-08T01:32:44.498846688Z" level=info msg="Forcibly stopping sandbox \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\""
May 8 01:32:44.499821 containerd[1823]: time="2025-05-08T01:32:44.499047773Z" level=info msg="TearDown network for sandbox \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\" successfully"
May 8 01:32:44.503333 containerd[1823]: time="2025-05-08T01:32:44.503294450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 01:32:44.503392 containerd[1823]: time="2025-05-08T01:32:44.503358948Z" level=info msg="RemovePodSandbox \"17bc06e23197b68c92ed180cfabbf0a05fbbaadb4d54c35871ecb1f953d0450c\" returns successfully"
May 8 01:32:44.503603 containerd[1823]: time="2025-05-08T01:32:44.503591115Z" level=info msg="StopPodSandbox for \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\""
May 8 01:32:44.503644 containerd[1823]: time="2025-05-08T01:32:44.503634350Z" level=info msg="TearDown network for sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" successfully"
May 8 01:32:44.503666 containerd[1823]: time="2025-05-08T01:32:44.503643161Z" level=info msg="StopPodSandbox for \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" returns successfully"
May 8 01:32:44.503753 containerd[1823]: time="2025-05-08T01:32:44.503741618Z" level=info msg="RemovePodSandbox for \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\""
May 8 01:32:44.503776 containerd[1823]: time="2025-05-08T01:32:44.503755538Z" level=info msg="Forcibly stopping sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\""
May 8 01:32:44.503796 containerd[1823]: time="2025-05-08T01:32:44.503777344Z" level=info msg="TearDown network for sandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" successfully"
May 8 01:32:44.504795 containerd[1823]: time="2025-05-08T01:32:44.504783345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 01:32:44.504822 containerd[1823]: time="2025-05-08T01:32:44.504801368Z" level=info msg="RemovePodSandbox \"dc0fab1f83c4b60036c85eb69e3f41070998681df7a3dc54f7fc1c73fc49c58e\" returns successfully"
May 8 01:32:45.257034 sshd[5586]: Connection closed by 147.75.109.163 port 46216
May 8 01:32:45.257513 sshd-session[5579]: pam_unix(sshd:session): session closed for user core
May 8 01:32:45.261147 systemd[1]: sshd@28-145.40.90.133:22-147.75.109.163:46216.service: Deactivated successfully.
May 8 01:32:45.263358 systemd[1]: session-30.scope: Deactivated successfully.
May 8 01:32:45.265202 systemd-logind[1805]: Session 30 logged out. Waiting for processes to exit.
May 8 01:32:45.266786 systemd-logind[1805]: Removed session 30.
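The per-connection sshd unit names throughout this log (e.g. `sshd@28-145.40.90.133:22-147.75.109.163:46216.service`) encode a connection counter plus the local and remote endpoints. A hypothetical parser for these instance names (the pattern is inferred from the names above, not taken from an OpenSSH or systemd specification):

```python
import re

# sshd@<counter>-<local-addr>:<local-port>-<peer-addr>:<peer-port>.service
_UNIT = re.compile(r"^sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service$")

def parse_sshd_unit(unit: str):
    """Split a per-connection sshd unit name into its endpoints;
    returns None when the name does not match the observed shape."""
    m = _UNIT.match(unit)
    if m is None:
        return None
    counter, laddr, lport, raddr, rport = m.groups()
    return {
        "counter": int(counter),
        "local": (laddr, int(lport)),
        "peer": (raddr, int(rport)),
    }
```

This makes it easy to correlate the `Started sshd@…` and `sshd@…: Deactivated successfully` pairs with the `Connection closed by <addr> port <port>` lines from sshd itself.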