May 12 23:59:16.750658 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:20:27 -00 2025
May 12 23:59:16.750676 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6
May 12 23:59:16.750682 kernel: Disabled fast string operations
May 12 23:59:16.750687 kernel: BIOS-provided physical RAM map:
May 12 23:59:16.750690 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
May 12 23:59:16.750695 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
May 12 23:59:16.750701 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
May 12 23:59:16.750705 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
May 12 23:59:16.750710 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
May 12 23:59:16.750714 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
May 12 23:59:16.750718 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
May 12 23:59:16.750723 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
May 12 23:59:16.750727 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
May 12 23:59:16.750731 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 12 23:59:16.750738 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
May 12 23:59:16.750743 kernel: NX (Execute Disable) protection: active
May 12 23:59:16.750748 kernel: APIC: Static calls initialized
May 12 23:59:16.750752 kernel: SMBIOS 2.7 present.
May 12 23:59:16.750757 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
May 12 23:59:16.750762 kernel: vmware: hypercall mode: 0x00
May 12 23:59:16.750767 kernel: Hypervisor detected: VMware
May 12 23:59:16.750772 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
May 12 23:59:16.750778 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
May 12 23:59:16.750783 kernel: vmware: using clock offset of 3287646808 ns
May 12 23:59:16.750788 kernel: tsc: Detected 3408.000 MHz processor
May 12 23:59:16.750793 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 12 23:59:16.750798 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 12 23:59:16.750803 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
May 12 23:59:16.750808 kernel: total RAM covered: 3072M
May 12 23:59:16.750813 kernel: Found optimal setting for mtrr clean up
May 12 23:59:16.750819 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
May 12 23:59:16.750824 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
May 12 23:59:16.750830 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 12 23:59:16.750835 kernel: Using GB pages for direct mapping
May 12 23:59:16.750840 kernel: ACPI: Early table checksum verification disabled
May 12 23:59:16.750845 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
May 12 23:59:16.750850 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
May 12 23:59:16.750855 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
May 12 23:59:16.750860 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
May 12 23:59:16.750872 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 12 23:59:16.750881 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 12 23:59:16.750887 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
May 12 23:59:16.750892 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
May 12 23:59:16.750898 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
May 12 23:59:16.750905 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
May 12 23:59:16.750912 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
May 12 23:59:16.750919 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
May 12 23:59:16.750924 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
May 12 23:59:16.750929 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
May 12 23:59:16.750934 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 12 23:59:16.750940 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 12 23:59:16.750945 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
May 12 23:59:16.750950 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
May 12 23:59:16.750955 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
May 12 23:59:16.750960 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
May 12 23:59:16.750966 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
May 12 23:59:16.750972 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
May 12 23:59:16.750977 kernel: system APIC only can use physical flat
May 12 23:59:16.750982 kernel: APIC: Switched APIC routing to: physical flat
May 12 23:59:16.750987 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 12 23:59:16.750992 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 12 23:59:16.750997 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 12 23:59:16.751002 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 12 23:59:16.751008 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 12 23:59:16.751013 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 12 23:59:16.751019 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 12 23:59:16.751024 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 12 23:59:16.751029 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
May 12 23:59:16.751034 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
May 12 23:59:16.751039 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
May 12 23:59:16.751044 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
May 12 23:59:16.751049 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
May 12 23:59:16.751054 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
May 12 23:59:16.751059 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
May 12 23:59:16.751064 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
May 12 23:59:16.751070 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
May 12 23:59:16.751075 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
May 12 23:59:16.751080 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
May 12 23:59:16.751085 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
May 12 23:59:16.751091 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
May 12 23:59:16.751096 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
May 12 23:59:16.751101 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
May 12 23:59:16.751106 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
May 12 23:59:16.751111 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
May 12 23:59:16.751116 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
May 12 23:59:16.751122 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
May 12 23:59:16.751127 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
May 12 23:59:16.751132 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
May 12 23:59:16.751137 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
May 12 23:59:16.751142 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
May 12 23:59:16.751147 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
May 12 23:59:16.751152 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
May 12 23:59:16.751157 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
May 12 23:59:16.751163 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
May 12 23:59:16.751168 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
May 12 23:59:16.751174 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
May 12 23:59:16.751179 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
May 12 23:59:16.751184 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
May 12 23:59:16.751189 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
May 12 23:59:16.751194 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
May 12 23:59:16.751199 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
May 12 23:59:16.751204 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
May 12 23:59:16.751209 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
May 12 23:59:16.751214 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
May 12 23:59:16.751219 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
May 12 23:59:16.751224 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
May 12 23:59:16.751231 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
May 12 23:59:16.751235 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
May 12 23:59:16.751241 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
May 12 23:59:16.751246 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
May 12 23:59:16.751251 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
May 12 23:59:16.751256 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
May 12 23:59:16.751261 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
May 12 23:59:16.751266 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
May 12 23:59:16.751271 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
May 12 23:59:16.751276 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
May 12 23:59:16.751282 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
May 12 23:59:16.751287 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
May 12 23:59:16.751296 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
May 12 23:59:16.751302 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
May 12 23:59:16.751308 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
May 12 23:59:16.751313 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
May 12 23:59:16.751319 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
May 12 23:59:16.751324 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
May 12 23:59:16.751330 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
May 12 23:59:16.751336 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
May 12 23:59:16.751341 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
May 12 23:59:16.751346 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
May 12 23:59:16.751352 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
May 12 23:59:16.751357 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
May 12 23:59:16.751363 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
May 12 23:59:16.751368 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
May 12 23:59:16.751373 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
May 12 23:59:16.751379 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
May 12 23:59:16.751385 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
May 12 23:59:16.751391 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
May 12 23:59:16.751396 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
May 12 23:59:16.751401 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
May 12 23:59:16.751407 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
May 12 23:59:16.751412 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
May 12 23:59:16.751417 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
May 12 23:59:16.751423 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
May 12 23:59:16.751428 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
May 12 23:59:16.751433 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
May 12 23:59:16.751440 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
May 12 23:59:16.751445 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
May 12 23:59:16.751451 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
May 12 23:59:16.751456 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
May 12 23:59:16.751461 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
May 12 23:59:16.751467 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
May 12 23:59:16.751472 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
May 12 23:59:16.751477 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
May 12 23:59:16.751483 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
May 12 23:59:16.751488 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
May 12 23:59:16.751494 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
May 12 23:59:16.751500 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
May 12 23:59:16.751505 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
May 12 23:59:16.751511 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
May 12 23:59:16.751516 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
May 12 23:59:16.751521 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
May 12 23:59:16.751527 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
May 12 23:59:16.751532 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
May 12 23:59:16.751538 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
May 12 23:59:16.751543 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
May 12 23:59:16.751548 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
May 12 23:59:16.751555 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
May 12 23:59:16.751560 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
May 12 23:59:16.751565 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
May 12 23:59:16.751571 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
May 12 23:59:16.751576 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
May 12 23:59:16.751582 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
May 12 23:59:16.751587 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
May 12 23:59:16.751592 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
May 12 23:59:16.751598 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
May 12 23:59:16.751603 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
May 12 23:59:16.751610 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
May 12 23:59:16.751615 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
May 12 23:59:16.751620 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
May 12 23:59:16.751626 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
May 12 23:59:16.751631 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
May 12 23:59:16.751637 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
May 12 23:59:16.751642 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
May 12 23:59:16.751647 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
May 12 23:59:16.751653 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
May 12 23:59:16.751658 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
May 12 23:59:16.751664 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
May 12 23:59:16.751670 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
May 12 23:59:16.751675 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 12 23:59:16.751681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 12 23:59:16.751687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
May 12 23:59:16.751692 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
May 12 23:59:16.751698 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
May 12 23:59:16.751703 kernel: Zone ranges:
May 12 23:59:16.751709 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 12 23:59:16.751714 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
May 12 23:59:16.751721 kernel: Normal empty
May 12 23:59:16.751726 kernel: Movable zone start for each node
May 12 23:59:16.751732 kernel: Early memory node ranges
May 12 23:59:16.751738 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
May 12 23:59:16.751743 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
May 12 23:59:16.751749 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
May 12 23:59:16.751754 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
May 12 23:59:16.751760 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 12 23:59:16.751765 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
May 12 23:59:16.751772 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
May 12 23:59:16.751777 kernel: ACPI: PM-Timer IO Port: 0x1008
May 12 23:59:16.751783 kernel: system APIC only can use physical flat
May 12 23:59:16.751788 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
May 12 23:59:16.751794 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 12 23:59:16.751799 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 12 23:59:16.751805 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 12 23:59:16.751810 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 12 23:59:16.751815 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 12 23:59:16.751821 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 12 23:59:16.751827 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 12 23:59:16.751833 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 12 23:59:16.751838 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 12 23:59:16.751844 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 12 23:59:16.751849 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 12 23:59:16.751855 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 12 23:59:16.751860 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 12 23:59:16.753283 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 12 23:59:16.753290 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 12 23:59:16.753299 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 12 23:59:16.753305 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
May 12 23:59:16.753311 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
May 12 23:59:16.753316 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
May 12 23:59:16.753322 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
May 12 23:59:16.753327 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
May 12 23:59:16.753332 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
May 12 23:59:16.753338 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
May 12 23:59:16.753343 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
May 12 23:59:16.753349 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
May 12 23:59:16.753356 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
May 12 23:59:16.753362 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
May 12 23:59:16.753367 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
May 12 23:59:16.753372 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
May 12 23:59:16.753378 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
May 12 23:59:16.753383 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
May 12 23:59:16.753389 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
May 12 23:59:16.753394 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
May 12 23:59:16.753399 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
May 12 23:59:16.753405 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
May 12 23:59:16.753412 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
May 12 23:59:16.753417 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
May 12 23:59:16.753423 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
May 12 23:59:16.753428 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
May 12 23:59:16.753433 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
May 12 23:59:16.753439 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
May 12 23:59:16.753444 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
May 12 23:59:16.753450 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
May 12 23:59:16.753455 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
May 12 23:59:16.753462 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
May 12 23:59:16.753467 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
May 12 23:59:16.753473 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
May 12 23:59:16.753479 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
May 12 23:59:16.753484 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
May 12 23:59:16.753490 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
May 12 23:59:16.753495 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
May 12 23:59:16.753501 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
May 12 23:59:16.753506 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
May 12 23:59:16.753512 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
May 12 23:59:16.753518 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
May 12 23:59:16.753524 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
May 12 23:59:16.753530 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
May 12 23:59:16.753535 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
May 12 23:59:16.753541 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
May 12 23:59:16.753546 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
May 12 23:59:16.753552 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
May 12 23:59:16.753557 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
May 12 23:59:16.753563 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
May 12 23:59:16.753568 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
May 12 23:59:16.753579 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
May 12 23:59:16.753585 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
May 12 23:59:16.753590 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
May 12 23:59:16.753595 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
May 12 23:59:16.753601 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
May 12 23:59:16.753607 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
May 12 23:59:16.753612 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
May 12 23:59:16.753618 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
May 12 23:59:16.753623 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
May 12 23:59:16.753630 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
May 12 23:59:16.753635 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
May 12 23:59:16.753641 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
May 12 23:59:16.753646 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
May 12 23:59:16.753652 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
May 12 23:59:16.753657 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
May 12 23:59:16.753663 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
May 12 23:59:16.753668 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
May 12 23:59:16.753673 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
May 12 23:59:16.753679 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
May 12 23:59:16.753686 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
May 12 23:59:16.753691 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
May 12 23:59:16.753697 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
May 12 23:59:16.753702 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
May 12 23:59:16.753707 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
May 12 23:59:16.753713 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
May 12 23:59:16.753718 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
May 12 23:59:16.753724 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
May 12 23:59:16.753729 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
May 12 23:59:16.753734 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
May 12 23:59:16.753741 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
May 12 23:59:16.753747 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
May 12 23:59:16.753752 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
May 12 23:59:16.753757 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
May 12 23:59:16.753763 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
May 12 23:59:16.753768 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
May 12 23:59:16.753774 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
May 12 23:59:16.753779 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
May 12 23:59:16.753784 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
May 12 23:59:16.753791 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
May 12 23:59:16.753797 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
May 12 23:59:16.753802 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
May 12 23:59:16.753807 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
May 12 23:59:16.753813 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
May 12 23:59:16.753818 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
May 12 23:59:16.753824 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
May 12 23:59:16.753829 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
May 12 23:59:16.753834 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
May 12 23:59:16.753840 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
May 12 23:59:16.753847 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
May 12 23:59:16.753852 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
May 12 23:59:16.753857 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
May 12 23:59:16.753871 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
May 12 23:59:16.753877 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
May 12 23:59:16.753883 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
May 12 23:59:16.753888 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
May 12 23:59:16.753893 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
May 12 23:59:16.753899 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
May 12 23:59:16.753904 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
May 12 23:59:16.753911 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
May 12 23:59:16.753917 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
May 12 23:59:16.753922 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
May 12 23:59:16.753928 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
May 12 23:59:16.753933 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
May 12 23:59:16.753939 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
May 12 23:59:16.753944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
May 12 23:59:16.753950 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 12 23:59:16.753956 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
May 12 23:59:16.753962 kernel: TSC deadline timer available
May 12 23:59:16.753968 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
May 12 23:59:16.753974 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
May 12 23:59:16.753980 kernel: Booting paravirtualized kernel on VMware hypervisor
May 12 23:59:16.753986 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 12 23:59:16.753992 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
May 12 23:59:16.753997 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
May 12 23:59:16.754003 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
May 12 23:59:16.754009 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
May 12 23:59:16.754016 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
May 12 23:59:16.754021 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
May 12 23:59:16.754027 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
May 12 23:59:16.754032 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
May 12 23:59:16.754047 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
May 12 23:59:16.754054 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
May 12 23:59:16.754060 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
May 12 23:59:16.754066 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
May 12 23:59:16.754072 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
May 12 23:59:16.754079 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
May 12 23:59:16.754084 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
May 12 23:59:16.754090 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
May 12 23:59:16.754096 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
May 12 23:59:16.754101 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
May 12 23:59:16.754107 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
May 12 23:59:16.754114 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6
May 12 23:59:16.754121 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 12 23:59:16.754128 kernel: random: crng init done
May 12 23:59:16.754134 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 12 23:59:16.754140 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
May 12 23:59:16.754146 kernel: printk: log_buf_len min size: 262144 bytes
May 12 23:59:16.754151 kernel: printk: log_buf_len: 1048576 bytes
May 12 23:59:16.754157 kernel: printk: early log buf free: 239648(91%)
May 12 23:59:16.754163 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 12 23:59:16.754169 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 12 23:59:16.754175 kernel: Fallback order for Node 0: 0
May 12 23:59:16.754182 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
May 12 23:59:16.754188 kernel: Policy zone: DMA32
May 12 23:59:16.754194 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 12 23:59:16.754201 kernel: Memory: 1932224K/2096628K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 164144K reserved, 0K cma-reserved)
May 12 23:59:16.754208 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
May 12 23:59:16.754214 kernel: ftrace: allocating 37993 entries in 149 pages
May 12 23:59:16.754221 kernel: ftrace: allocated 149 pages with 4 groups
May 12 23:59:16.754227 kernel: Dynamic Preempt: voluntary
May 12 23:59:16.754233 kernel: rcu: Preemptible hierarchical RCU implementation.
May 12 23:59:16.754239 kernel: rcu: RCU event tracing is enabled.
May 12 23:59:16.754246 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
May 12 23:59:16.754252 kernel: Trampoline variant of Tasks RCU enabled.
May 12 23:59:16.754257 kernel: Rude variant of Tasks RCU enabled.
May 12 23:59:16.754263 kernel: Tracing variant of Tasks RCU enabled.
May 12 23:59:16.754269 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 12 23:59:16.754277 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
May 12 23:59:16.754283 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
May 12 23:59:16.754289 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
May 12 23:59:16.754295 kernel: Console: colour VGA+ 80x25
May 12 23:59:16.754300 kernel: printk: console [tty0] enabled
May 12 23:59:16.754306 kernel: printk: console [ttyS0] enabled
May 12 23:59:16.754312 kernel: ACPI: Core revision 20230628
May 12 23:59:16.754318 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
May 12 23:59:16.754324 kernel: APIC: Switch to symmetric I/O mode setup
May 12 23:59:16.754331 kernel: x2apic enabled
May 12 23:59:16.754337 kernel: APIC: Switched APIC routing to: physical x2apic
May 12 23:59:16.754343 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 12 23:59:16.754349 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 12 23:59:16.754355 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
May 12 23:59:16.754361 kernel: Disabled fast string operations
May 12 23:59:16.754367 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 12 23:59:16.754373 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 12 23:59:16.754379 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 12 23:59:16.754386 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 12 23:59:16.754392 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 12 23:59:16.754398 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 12 23:59:16.754404 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 12 23:59:16.754410 kernel: RETBleed: Mitigation: Enhanced IBRS
May 12 23:59:16.754416 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 12 23:59:16.754422 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 12 23:59:16.754428 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 12 23:59:16.754434 kernel: SRBDS: Unknown: Dependent on hypervisor status
May 12 23:59:16.754441 kernel: GDS: Unknown: Dependent on hypervisor status
May 12 23:59:16.754448 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 12 23:59:16.754454 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 12 23:59:16.754460 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 12 23:59:16.754466 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 12 23:59:16.754472 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 12 23:59:16.754478 kernel: Freeing SMP alternatives memory: 32K May 12 23:59:16.754484 kernel: pid_max: default: 131072 minimum: 1024 May 12 23:59:16.754490 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 12 23:59:16.754498 kernel: landlock: Up and running. May 12 23:59:16.754504 kernel: SELinux: Initializing. May 12 23:59:16.754510 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 12 23:59:16.754516 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 12 23:59:16.754522 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 12 23:59:16.754529 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 12 23:59:16.754535 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 12 23:59:16.754541 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 12 23:59:16.754547 kernel: Performance Events: Skylake events, core PMU driver. May 12 23:59:16.754555 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 12 23:59:16.754561 kernel: core: CPUID marked event: 'instructions' unavailable May 12 23:59:16.754567 kernel: core: CPUID marked event: 'bus cycles' unavailable May 12 23:59:16.754573 kernel: core: CPUID marked event: 'cache references' unavailable May 12 23:59:16.754578 kernel: core: CPUID marked event: 'cache misses' unavailable May 12 23:59:16.754588 kernel: core: CPUID marked event: 'branch instructions' unavailable May 12 23:59:16.754594 kernel: core: CPUID marked event: 'branch misses' unavailable May 12 23:59:16.754600 kernel: ... version: 1 May 12 23:59:16.754608 kernel: ... bit width: 48 May 12 23:59:16.754614 kernel: ... generic registers: 4 May 12 23:59:16.754620 kernel: ... value mask: 0000ffffffffffff May 12 23:59:16.754625 kernel: ... 
max period: 000000007fffffff May 12 23:59:16.754631 kernel: ... fixed-purpose events: 0 May 12 23:59:16.754637 kernel: ... event mask: 000000000000000f May 12 23:59:16.754643 kernel: signal: max sigframe size: 1776 May 12 23:59:16.754649 kernel: rcu: Hierarchical SRCU implementation. May 12 23:59:16.754655 kernel: rcu: Max phase no-delay instances is 400. May 12 23:59:16.754661 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 12 23:59:16.754668 kernel: smp: Bringing up secondary CPUs ... May 12 23:59:16.754674 kernel: smpboot: x86: Booting SMP configuration: May 12 23:59:16.754680 kernel: .... node #0, CPUs: #1 May 12 23:59:16.754686 kernel: Disabled fast string operations May 12 23:59:16.754692 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 12 23:59:16.754698 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 12 23:59:16.754703 kernel: smp: Brought up 1 node, 2 CPUs May 12 23:59:16.754709 kernel: smpboot: Max logical packages: 128 May 12 23:59:16.754716 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 12 23:59:16.754723 kernel: devtmpfs: initialized May 12 23:59:16.754729 kernel: x86/mm: Memory block size: 128MB May 12 23:59:16.754735 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 12 23:59:16.754741 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 12 23:59:16.754747 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 12 23:59:16.754753 kernel: pinctrl core: initialized pinctrl subsystem May 12 23:59:16.754759 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 12 23:59:16.754765 kernel: audit: initializing netlink subsys (disabled) May 12 23:59:16.754770 kernel: audit: type=2000 audit(1747094355.065:1): state=initialized audit_enabled=0 res=1 May 12 23:59:16.754778 kernel: thermal_sys: Registered thermal governor 'step_wise' May 12 23:59:16.754784 
kernel: thermal_sys: Registered thermal governor 'user_space' May 12 23:59:16.754790 kernel: cpuidle: using governor menu May 12 23:59:16.754796 kernel: Simple Boot Flag at 0x36 set to 0x80 May 12 23:59:16.754802 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 12 23:59:16.754808 kernel: dca service started, version 1.12.1 May 12 23:59:16.754815 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 12 23:59:16.754821 kernel: PCI: Using configuration type 1 for base access May 12 23:59:16.754827 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 12 23:59:16.754834 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 12 23:59:16.754840 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 12 23:59:16.754846 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 12 23:59:16.754852 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 12 23:59:16.754858 kernel: ACPI: Added _OSI(Module Device) May 12 23:59:16.756901 kernel: ACPI: Added _OSI(Processor Device) May 12 23:59:16.756910 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 12 23:59:16.756916 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 12 23:59:16.756922 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 12 23:59:16.756932 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 12 23:59:16.756938 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 12 23:59:16.756945 kernel: ACPI: Interpreter enabled May 12 23:59:16.756951 kernel: ACPI: PM: (supports S0 S1 S5) May 12 23:59:16.756956 kernel: ACPI: Using IOAPIC for interrupt routing May 12 23:59:16.756963 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 12 23:59:16.756968 kernel: PCI: Using E820 reservations for host bridge windows May 12 23:59:16.756974 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F May 12 23:59:16.756980 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 12 23:59:16.757101 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 12 23:59:16.757160 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 12 23:59:16.757212 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 12 23:59:16.757221 kernel: PCI host bridge to bus 0000:00 May 12 23:59:16.757274 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 12 23:59:16.757321 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 12 23:59:16.757369 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 12 23:59:16.757414 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 12 23:59:16.757459 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 12 23:59:16.757503 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 12 23:59:16.757564 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 12 23:59:16.757624 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 12 23:59:16.757680 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 12 23:59:16.757741 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 12 23:59:16.757792 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 12 23:59:16.757845 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 12 23:59:16.757915 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 12 23:59:16.757968 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 12 23:59:16.758019 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 12 23:59:16.758080 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 12 23:59:16.758134 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI May 12 23:59:16.758186 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 12 23:59:16.758242 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 12 23:59:16.758294 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 12 23:59:16.758345 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 12 23:59:16.758419 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 12 23:59:16.758483 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 12 23:59:16.758541 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 12 23:59:16.758595 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 12 23:59:16.758645 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 12 23:59:16.758708 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 12 23:59:16.758769 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 12 23:59:16.758829 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.760960 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 12 23:59:16.761043 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.761102 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 12 23:59:16.761161 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.761214 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 12 23:59:16.761270 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.761329 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 12 23:59:16.761388 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.761441 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 12 23:59:16.761502 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.761559 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold May 12 23:59:16.761615 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.761670 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 12 23:59:16.761726 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.761779 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 12 23:59:16.761835 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.762962 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 12 23:59:16.763026 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.763081 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 12 23:59:16.763137 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.763190 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 12 23:59:16.763246 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.763298 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 12 23:59:16.763357 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.763410 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 12 23:59:16.763469 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.763521 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 12 23:59:16.763589 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.763644 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 12 23:59:16.763700 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.763756 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 12 23:59:16.763812 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.764592 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 12 23:59:16.764696 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.764785 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold May 12 23:59:16.764848 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.765239 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 12 23:59:16.765301 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.765355 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 12 23:59:16.765413 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.765467 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 12 23:59:16.765536 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.765594 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 12 23:59:16.765651 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.765704 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 12 23:59:16.765762 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.765814 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 12 23:59:16.765904 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.767597 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 12 23:59:16.767662 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.767719 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 12 23:59:16.767784 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.767841 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 12 23:59:16.767924 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.767983 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 12 23:59:16.768059 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.768129 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 12 23:59:16.768190 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 12 
23:59:16.768244 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 12 23:59:16.768302 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.768381 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 12 23:59:16.768472 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 12 23:59:16.768569 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 12 23:59:16.768666 kernel: pci_bus 0000:01: extended config space not accessible May 12 23:59:16.768747 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 12 23:59:16.768827 kernel: pci_bus 0000:02: extended config space not accessible May 12 23:59:16.768843 kernel: acpiphp: Slot [32] registered May 12 23:59:16.768857 kernel: acpiphp: Slot [33] registered May 12 23:59:16.770929 kernel: acpiphp: Slot [34] registered May 12 23:59:16.770938 kernel: acpiphp: Slot [35] registered May 12 23:59:16.770945 kernel: acpiphp: Slot [36] registered May 12 23:59:16.770951 kernel: acpiphp: Slot [37] registered May 12 23:59:16.770957 kernel: acpiphp: Slot [38] registered May 12 23:59:16.770963 kernel: acpiphp: Slot [39] registered May 12 23:59:16.770969 kernel: acpiphp: Slot [40] registered May 12 23:59:16.770975 kernel: acpiphp: Slot [41] registered May 12 23:59:16.770981 kernel: acpiphp: Slot [42] registered May 12 23:59:16.770991 kernel: acpiphp: Slot [43] registered May 12 23:59:16.770997 kernel: acpiphp: Slot [44] registered May 12 23:59:16.771003 kernel: acpiphp: Slot [45] registered May 12 23:59:16.771009 kernel: acpiphp: Slot [46] registered May 12 23:59:16.771015 kernel: acpiphp: Slot [47] registered May 12 23:59:16.771021 kernel: acpiphp: Slot [48] registered May 12 23:59:16.771027 kernel: acpiphp: Slot [49] registered May 12 23:59:16.771033 kernel: acpiphp: Slot [50] registered May 12 23:59:16.771039 kernel: acpiphp: Slot [51] registered May 12 23:59:16.771046 kernel: acpiphp: Slot [52] registered May 12 23:59:16.771052 kernel: acpiphp: Slot [53] registered 
May 12 23:59:16.771058 kernel: acpiphp: Slot [54] registered May 12 23:59:16.771064 kernel: acpiphp: Slot [55] registered May 12 23:59:16.771070 kernel: acpiphp: Slot [56] registered May 12 23:59:16.771078 kernel: acpiphp: Slot [57] registered May 12 23:59:16.771088 kernel: acpiphp: Slot [58] registered May 12 23:59:16.771098 kernel: acpiphp: Slot [59] registered May 12 23:59:16.771109 kernel: acpiphp: Slot [60] registered May 12 23:59:16.771118 kernel: acpiphp: Slot [61] registered May 12 23:59:16.771129 kernel: acpiphp: Slot [62] registered May 12 23:59:16.771136 kernel: acpiphp: Slot [63] registered May 12 23:59:16.771215 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 12 23:59:16.771272 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 12 23:59:16.771326 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 12 23:59:16.771378 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 12 23:59:16.771430 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 12 23:59:16.771487 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 12 23:59:16.771571 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 12 23:59:16.771626 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 12 23:59:16.771677 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 12 23:59:16.771740 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 12 23:59:16.771794 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 12 23:59:16.771870 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 12 23:59:16.771955 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 12 23:59:16.772039 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 12 
23:59:16.772121 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' May 12 23:59:16.772212 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 12 23:59:16.772299 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 12 23:59:16.772383 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 12 23:59:16.772466 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 12 23:59:16.772548 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 12 23:59:16.772631 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 12 23:59:16.772710 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 12 23:59:16.772794 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 12 23:59:16.774943 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 12 23:59:16.775068 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 12 23:59:16.775157 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 12 23:59:16.775245 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 12 23:59:16.775330 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 12 23:59:16.775417 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 12 23:59:16.775501 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 12 23:59:16.775583 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 12 23:59:16.775669 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 12 23:59:16.775759 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 12 23:59:16.775842 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 12 23:59:16.776470 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 12 23:59:16.776536 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 12 23:59:16.776598 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] May 12 23:59:16.776652 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 12 23:59:16.776708 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 12 23:59:16.776763 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 12 23:59:16.776821 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 12 23:59:16.776961 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 12 23:59:16.777019 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 12 23:59:16.777073 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 12 23:59:16.777126 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 12 23:59:16.777179 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 12 23:59:16.777232 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 12 23:59:16.777319 kernel: pci 0000:0b:00.0: supports D1 D2 May 12 23:59:16.777378 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 12 23:59:16.777432 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 12 23:59:16.777484 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 12 23:59:16.777537 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 12 23:59:16.777588 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 12 23:59:16.777642 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 12 23:59:16.777694 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 12 23:59:16.777747 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 12 23:59:16.777801 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 12 23:59:16.777854 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 12 23:59:16.777961 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 12 23:59:16.778013 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 12 23:59:16.778064 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 12 23:59:16.778118 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 12 23:59:16.778170 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 12 23:59:16.778224 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 12 23:59:16.778278 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 12 23:59:16.778331 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 12 23:59:16.778383 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 12 23:59:16.778438 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 12 23:59:16.778503 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 12 23:59:16.778563 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 12 23:59:16.778709 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 12 23:59:16.778767 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 12 23:59:16.778821 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] May 12 23:59:16.778887 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 12 23:59:16.778945 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 12 23:59:16.778998 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 12 23:59:16.779055 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 12 23:59:16.779110 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 12 23:59:16.779164 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 12 23:59:16.779582 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 12 23:59:16.779643 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 12 23:59:16.779699 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 12 23:59:16.779751 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 12 23:59:16.779804 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 12 23:59:16.779991 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 12 23:59:16.780050 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 12 23:59:16.780103 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 12 23:59:16.780160 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 12 23:59:16.780215 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 12 23:59:16.780267 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 12 23:59:16.780319 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 12 23:59:16.780372 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 12 23:59:16.780425 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 12 23:59:16.780476 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 12 23:59:16.780542 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 12 23:59:16.780606 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 12 23:59:16.780658 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 12 23:59:16.780711 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 12 23:59:16.780763 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 12 23:59:16.780814 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 12 23:59:16.780902 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 12 23:59:16.780957 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 12 23:59:16.781011 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 12 23:59:16.781064 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 12 23:59:16.781115 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 12 23:59:16.781180 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 12 23:59:16.781233 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 12 23:59:16.781286 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 12 23:59:16.781338 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 12 23:59:16.781388 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 12 23:59:16.781443 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 12 23:59:16.781498 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 12 23:59:16.781550 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 12 23:59:16.781601 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 12 23:59:16.781655 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 12 23:59:16.781706 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 12 23:59:16.781757 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 12 23:59:16.781810 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 12 
23:59:16.781872 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 12 23:59:16.781924 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 12 23:59:16.781979 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 12 23:59:16.782030 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 12 23:59:16.782081 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 12 23:59:16.782135 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 12 23:59:16.782187 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 12 23:59:16.782239 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 12 23:59:16.782295 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 12 23:59:16.782347 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 12 23:59:16.782399 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 12 23:59:16.782408 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 12 23:59:16.782415 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 12 23:59:16.782421 kernel: ACPI: PCI: Interrupt link LNKB disabled May 12 23:59:16.782427 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 12 23:59:16.782433 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 12 23:59:16.782441 kernel: iommu: Default domain type: Translated May 12 23:59:16.782447 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 12 23:59:16.782453 kernel: PCI: Using ACPI for IRQ routing May 12 23:59:16.782459 kernel: PCI: pci_cache_line_size set to 64 bytes May 12 23:59:16.782465 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 12 23:59:16.782471 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 12 23:59:16.782523 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 12 23:59:16.782575 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible May 12 23:59:16.782625 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 12 23:59:16.782636 kernel: vgaarb: loaded May 12 23:59:16.782642 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 12 23:59:16.782649 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 12 23:59:16.782655 kernel: clocksource: Switched to clocksource tsc-early May 12 23:59:16.782660 kernel: VFS: Disk quotas dquot_6.6.0 May 12 23:59:16.782666 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 12 23:59:16.782672 kernel: pnp: PnP ACPI init May 12 23:59:16.782728 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 12 23:59:16.782777 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 12 23:59:16.782828 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 12 23:59:16.782890 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 12 23:59:16.782943 kernel: pnp 00:06: [dma 2] May 12 23:59:16.782999 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 12 23:59:16.783047 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 12 23:59:16.783094 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 12 23:59:16.783105 kernel: pnp: PnP ACPI: found 8 devices May 12 23:59:16.783111 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 12 23:59:16.783117 kernel: NET: Registered PF_INET protocol family May 12 23:59:16.783123 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 12 23:59:16.783130 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 12 23:59:16.783136 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 12 23:59:16.783142 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 12 23:59:16.783148 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 12 23:59:16.783154 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 12 23:59:16.783161 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 12 23:59:16.783167 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 12 23:59:16.783173 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 12 23:59:16.783179 kernel: NET: Registered PF_XDP protocol family May 12 23:59:16.783233 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 12 23:59:16.783288 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 12 23:59:16.783343 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 12 23:59:16.783400 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 12 23:59:16.783455 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 12 23:59:16.783510 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 12 23:59:16.783564 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 12 23:59:16.783623 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 12 23:59:16.783677 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 12 23:59:16.783735 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 12 23:59:16.783789 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 12 23:59:16.783843 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 12 23:59:16.784051 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 12 
23:59:16.784107 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 12 23:59:16.784160 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 12 23:59:16.784216 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 12 23:59:16.784268 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 12 23:59:16.784320 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 12 23:59:16.784373 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 12 23:59:16.784426 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 12 23:59:16.784478 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 12 23:59:16.784533 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 12 23:59:16.784585 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 12 23:59:16.784639 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 12 23:59:16.784692 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 12 23:59:16.784745 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 12 23:59:16.784819 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.784885 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 12 23:59:16.784941 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.784993 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785045 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.785097 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785148 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 
12 23:59:16.785199 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785250 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.785302 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785358 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.785410 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785461 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.785513 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785564 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.785616 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785667 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.785719 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785775 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.785827 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 12 23:59:16.785999 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.786056 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 12 23:59:16.786109 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.786161 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 12 23:59:16.786212 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.786264 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 12 23:59:16.786319 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.786372 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 12 23:59:16.786423 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.786475 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] May 12 23:59:16.786527 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.786582 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 12 23:59:16.786636 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.786687 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 12 23:59:16.786742 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.786794 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 12 23:59:16.786846 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.787068 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 12 23:59:16.787123 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.787177 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 12 23:59:16.787250 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.787304 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 12 23:59:16.787359 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.787410 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 12 23:59:16.787462 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.787512 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 12 23:59:16.787563 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.787614 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 12 23:59:16.787665 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.787716 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 12 23:59:16.787766 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.787819 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] May 12 23:59:16.787927 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.788210 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 12 23:59:16.788273 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.789940 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 12 23:59:16.790004 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.790060 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 12 23:59:16.790114 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.790168 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 12 23:59:16.790220 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.790278 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 12 23:59:16.790330 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.790383 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 12 23:59:16.790434 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.790486 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 12 23:59:16.790538 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.790603 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 12 23:59:16.790657 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.790711 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 12 23:59:16.790766 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.790818 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 12 23:59:16.791623 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.791690 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 12 23:59:16.791746 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.791801 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 12 23:59:16.791854 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.791924 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 12 23:59:16.791977 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.792030 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 12 23:59:16.792085 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.792148 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 12 23:59:16.792209 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 12 23:59:16.792506 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 12 23:59:16.792572 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 12 23:59:16.792626 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 12 23:59:16.792694 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 12 23:59:16.792774 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 12 23:59:16.793090 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 12 23:59:16.793152 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 12 23:59:16.793206 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 12 23:59:16.793260 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 12 23:59:16.793312 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 12 23:59:16.793382 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 12 23:59:16.793663 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 12 23:59:16.793721 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 12 23:59:16.793795 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 12 
23:59:16.794163 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 12 23:59:16.794228 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 12 23:59:16.794284 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 12 23:59:16.794336 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 12 23:59:16.794389 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 12 23:59:16.794440 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 12 23:59:16.794492 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 12 23:59:16.794543 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 12 23:59:16.794595 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 12 23:59:16.794645 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 12 23:59:16.794702 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 12 23:59:16.794752 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 12 23:59:16.794817 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 12 23:59:16.794897 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 12 23:59:16.794951 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 12 23:59:16.795003 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 12 23:59:16.795058 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 12 23:59:16.795111 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 12 23:59:16.795162 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 12 23:59:16.795227 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 12 23:59:16.795283 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 12 23:59:16.795335 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 12 23:59:16.795385 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] May 12 23:59:16.795437 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 12 23:59:16.795489 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 12 23:59:16.795545 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 12 23:59:16.795596 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 12 23:59:16.795647 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 12 23:59:16.795700 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 12 23:59:16.795752 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 12 23:59:16.795803 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 12 23:59:16.795854 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 12 23:59:16.796054 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 12 23:59:16.796108 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 12 23:59:16.796163 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 12 23:59:16.796217 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 12 23:59:16.796268 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 12 23:59:16.796320 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 12 23:59:16.796374 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 12 23:59:16.796638 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 12 23:59:16.796695 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 12 23:59:16.796750 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 12 23:59:16.796802 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 12 23:59:16.796855 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 12 23:59:16.796927 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 12 23:59:16.796980 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] May 12 23:59:16.798922 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 12 23:59:16.798989 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 12 23:59:16.799044 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 12 23:59:16.799097 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 12 23:59:16.799149 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 12 23:59:16.799203 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 12 23:59:16.799255 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 12 23:59:16.799312 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 12 23:59:16.799363 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 12 23:59:16.799417 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 12 23:59:16.799469 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 12 23:59:16.799521 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 12 23:59:16.799593 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 12 23:59:16.799657 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 12 23:59:16.799710 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 12 23:59:16.799763 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 12 23:59:16.799817 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 12 23:59:16.799883 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 12 23:59:16.799936 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 12 23:59:16.799990 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 12 23:59:16.800042 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 12 23:59:16.800093 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 12 
23:59:16.800147 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 12 23:59:16.800199 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 12 23:59:16.800250 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 12 23:59:16.800303 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 12 23:59:16.800357 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 12 23:59:16.800409 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 12 23:59:16.800463 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 12 23:59:16.800514 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 12 23:59:16.800565 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 12 23:59:16.800632 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 12 23:59:16.800686 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 12 23:59:16.800738 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 12 23:59:16.800790 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 12 23:59:16.800845 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 12 23:59:16.800906 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 12 23:59:16.800959 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 12 23:59:16.801010 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 12 23:59:16.801062 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 12 23:59:16.801115 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 12 23:59:16.801166 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 12 23:59:16.801220 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 12 23:59:16.801272 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 12 23:59:16.801329 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] May 12 23:59:16.801396 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 12 23:59:16.801448 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 12 23:59:16.801500 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 12 23:59:16.801553 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 12 23:59:16.801605 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 12 23:59:16.801656 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 12 23:59:16.801710 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 12 23:59:16.801762 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 12 23:59:16.801813 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 12 23:59:16.802216 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 12 23:59:16.802279 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 12 23:59:16.802327 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 12 23:59:16.802373 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 12 23:59:16.802418 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 12 23:59:16.802468 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 12 23:59:16.802516 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 12 23:59:16.802563 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 12 23:59:16.802613 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 12 23:59:16.802659 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 12 23:59:16.802706 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 12 23:59:16.802751 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 12 23:59:16.802797 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 12 23:59:16.802849 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 12 23:59:16.802909 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 12 23:59:16.802961 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 12 23:59:16.803743 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 12 23:59:16.803799 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 12 23:59:16.803848 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 12 23:59:16.803939 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 12 23:59:16.803989 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 12 23:59:16.804036 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 12 23:59:16.804091 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 12 23:59:16.804138 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 12 23:59:16.804193 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 12 23:59:16.804241 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 12 23:59:16.804292 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 12 23:59:16.804340 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 12 23:59:16.804394 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 12 23:59:16.804441 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 12 23:59:16.804492 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 12 23:59:16.804540 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 12 23:59:16.804602 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 12 23:59:16.804649 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 12 23:59:16.804698 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 12 23:59:16.804750 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] May 12 23:59:16.804798 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 12 23:59:16.804845 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 12 23:59:16.804903 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 12 23:59:16.804952 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 12 23:59:16.805002 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 12 23:59:16.805064 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 12 23:59:16.805116 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 12 23:59:16.805170 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 12 23:59:16.805218 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 12 23:59:16.805270 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 12 23:59:16.805317 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 12 23:59:16.805372 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 12 23:59:16.805420 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 12 23:59:16.805471 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 12 23:59:16.805519 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 12 23:59:16.805569 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 12 23:59:16.805617 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 12 23:59:16.805667 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 12 23:59:16.805718 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 12 23:59:16.805766 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 12 23:59:16.805813 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 12 23:59:16.805913 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] May 12 23:59:16.805967 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 12 23:59:16.806014 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 12 23:59:16.806069 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 12 23:59:16.806116 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 12 23:59:16.806170 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 12 23:59:16.806218 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 12 23:59:16.806270 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 12 23:59:16.806318 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 12 23:59:16.806371 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 12 23:59:16.806418 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 12 23:59:16.806469 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 12 23:59:16.806516 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 12 23:59:16.806566 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 12 23:59:16.806619 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 12 23:59:16.806669 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 12 23:59:16.806720 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 12 23:59:16.806768 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 12 23:59:16.806814 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 12 23:59:16.806872 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 12 23:59:16.806922 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 12 23:59:16.806976 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 12 23:59:16.807026 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] May 12 23:59:16.807077 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 12 23:59:16.807124 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 12 23:59:16.807177 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 12 23:59:16.807225 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 12 23:59:16.807277 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 12 23:59:16.807326 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 12 23:59:16.807377 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 12 23:59:16.807426 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 12 23:59:16.807490 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 12 23:59:16.807506 kernel: PCI: CLS 32 bytes, default 64 May 12 23:59:16.807517 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 12 23:59:16.807527 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 12 23:59:16.807540 kernel: clocksource: Switched to clocksource tsc May 12 23:59:16.807546 kernel: Initialise system trusted keyrings May 12 23:59:16.807553 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 12 23:59:16.807559 kernel: Key type asymmetric registered May 12 23:59:16.807565 kernel: Asymmetric key parser 'x509' registered May 12 23:59:16.807571 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 12 23:59:16.807577 kernel: io scheduler mq-deadline registered May 12 23:59:16.807583 kernel: io scheduler kyber registered May 12 23:59:16.807590 kernel: io scheduler bfq registered May 12 23:59:16.807651 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 12 23:59:16.807706 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.807761 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 12 23:59:16.807815 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.807878 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 12 23:59:16.807933 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.807988 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 12 23:59:16.808044 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.808099 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 12 23:59:16.808151 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.808204 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 12 23:59:16.808257 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.808313 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 12 23:59:16.808366 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.808419 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 12 23:59:16.808473 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.808525 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 12 23:59:16.808578 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.808634 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 12 23:59:16.808687 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.808739 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 12 23:59:16.808791 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.808844 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 12 23:59:16.808997 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809052 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 12 23:59:16.809107 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809161 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 12 23:59:16.809213 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809267 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 12 23:59:16.809318 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809374 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 12 23:59:16.809427 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809479 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 12 23:59:16.809531 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809587 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 12 23:59:16.809641 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809697 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 12 23:59:16.809748 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809802 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 12 23:59:16.809854 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.809920 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 12 23:59:16.809974 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810030 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 12 23:59:16.810082 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810135 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 12 23:59:16.810190 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810243 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 12 23:59:16.810299 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810351 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 12 23:59:16.810403 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810456 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 12 23:59:16.810507 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810559 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 12 23:59:16.810626 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810684 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 12 23:59:16.810737 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810790 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 12 23:59:16.810842 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.810908 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 12 23:59:16.810967 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.811021 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 12 23:59:16.811073 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.811126 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 12 23:59:16.811179 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:59:16.811191 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 May 12 23:59:16.811197 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 12 23:59:16.811204 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 12 23:59:16.811210 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 12 23:59:16.811216 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 12 23:59:16.811223 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 12 23:59:16.811276 kernel: rtc_cmos 00:01: registered as rtc0 May 12 23:59:16.811326 kernel: rtc_cmos 00:01: setting system clock to 2025-05-12T23:59:16 UTC (1747094356) May 12 23:59:16.811377 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 12 23:59:16.811386 kernel: intel_pstate: CPU model not supported May 12 23:59:16.811393 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 12 23:59:16.811404 kernel: NET: Registered PF_INET6 protocol family May 12 23:59:16.811414 kernel: Segment Routing with IPv6 May 12 23:59:16.811424 kernel: In-situ OAM (IOAM) with IPv6 May 12 23:59:16.811431 kernel: NET: Registered PF_PACKET protocol family May 12 23:59:16.811437 kernel: Key type dns_resolver registered May 12 23:59:16.811444 kernel: IPI shorthand broadcast: enabled May 12 23:59:16.811452 kernel: sched_clock: Marking stable (899358105, 225036637)->(1186523729, -62128987) May 12 23:59:16.811458 kernel: registered taskstats version 1 May 12 23:59:16.811465 kernel: Loading compiled-in X.509 certificates May 12 23:59:16.811471 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 72bf95fdb9aed340290dd5f38e76c1ea0e6f32b4' May 12 23:59:16.811477 kernel: Key type .fscrypt registered May 12 23:59:16.811483 kernel: Key type fscrypt-provisioning registered May 12 23:59:16.811490 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 12 23:59:16.811496 kernel: ima: Allocated hash algorithm: sha1 May 12 23:59:16.811502 kernel: ima: No architecture policies found May 12 23:59:16.811510 kernel: clk: Disabling unused clocks May 12 23:59:16.811516 kernel: Freeing unused kernel image (initmem) memory: 43604K May 12 23:59:16.811523 kernel: Write protecting the kernel read-only data: 40960k May 12 23:59:16.811529 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 12 23:59:16.811535 kernel: Run /init as init process May 12 23:59:16.811541 kernel: with arguments: May 12 23:59:16.811548 kernel: /init May 12 23:59:16.811554 kernel: with environment: May 12 23:59:16.811560 kernel: HOME=/ May 12 23:59:16.811567 kernel: TERM=linux May 12 23:59:16.811573 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 12 23:59:16.811580 systemd[1]: Successfully made /usr/ read-only. May 12 23:59:16.811589 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 12 23:59:16.811597 systemd[1]: Detected virtualization vmware. May 12 23:59:16.811603 systemd[1]: Detected architecture x86-64. May 12 23:59:16.811609 systemd[1]: Running in initrd. May 12 23:59:16.811615 systemd[1]: No hostname configured, using default hostname. May 12 23:59:16.811624 systemd[1]: Hostname set to . May 12 23:59:16.811630 systemd[1]: Initializing machine ID from random generator. May 12 23:59:16.811636 systemd[1]: Queued start job for default target initrd.target. May 12 23:59:16.811643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 12 23:59:16.811649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 12 23:59:16.811656 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 12 23:59:16.811663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 12 23:59:16.811670 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 12 23:59:16.811678 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 12 23:59:16.811686 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 12 23:59:16.811692 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 12 23:59:16.811699 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 12 23:59:16.811705 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 12 23:59:16.811712 systemd[1]: Reached target paths.target - Path Units. May 12 23:59:16.811719 systemd[1]: Reached target slices.target - Slice Units. May 12 23:59:16.811726 systemd[1]: Reached target swap.target - Swaps. May 12 23:59:16.811733 systemd[1]: Reached target timers.target - Timer Units. May 12 23:59:16.811740 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 12 23:59:16.811746 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 12 23:59:16.811752 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 12 23:59:16.811759 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 12 23:59:16.811766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 12 23:59:16.811772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 12 23:59:16.811780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 12 23:59:16.811786 systemd[1]: Reached target sockets.target - Socket Units. May 12 23:59:16.811793 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 12 23:59:16.811800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 12 23:59:16.811806 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 12 23:59:16.811812 systemd[1]: Starting systemd-fsck-usr.service... May 12 23:59:16.811819 systemd[1]: Starting systemd-journald.service - Journal Service... May 12 23:59:16.811825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 12 23:59:16.811832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 23:59:16.811839 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 12 23:59:16.811846 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 12 23:59:16.811853 systemd[1]: Finished systemd-fsck-usr.service. May 12 23:59:16.811860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 12 23:59:16.812163 systemd-journald[217]: Collecting audit messages is disabled. May 12 23:59:16.812181 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:59:16.812189 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 12 23:59:16.812196 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 12 23:59:16.812209 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 23:59:16.812220 kernel: Bridge firewalling registered May 12 23:59:16.812231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 12 23:59:16.812238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 12 23:59:16.812245 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 12 23:59:16.812252 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 23:59:16.812258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 23:59:16.812265 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 12 23:59:16.812274 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 12 23:59:16.812282 systemd-journald[217]: Journal started May 12 23:59:16.812297 systemd-journald[217]: Runtime Journal (/run/log/journal/91d3f09cae364c28a5928929c06f1f4e) is 4.8M, max 38.6M, 33.7M free. May 12 23:59:16.754610 systemd-modules-load[218]: Inserted module 'overlay' May 12 23:59:16.781884 systemd-modules-load[218]: Inserted module 'br_netfilter' May 12 23:59:16.814101 systemd[1]: Started systemd-journald.service - Journal Service. May 12 23:59:16.814646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 12 23:59:16.818500 dracut-cmdline[239]: dracut-dracut-053 May 12 23:59:16.822184 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 12 23:59:16.825801 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 23:59:16.826803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 12 23:59:16.852596 systemd-resolved[273]: Positive Trust Anchors: May 12 23:59:16.852608 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 12 23:59:16.852630 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 12 23:59:16.855000 systemd-resolved[273]: Defaulting to hostname 'linux'. May 12 23:59:16.855900 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 12 23:59:16.856060 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 12 23:59:16.868878 kernel: SCSI subsystem initialized May 12 23:59:16.874880 kernel: Loading iSCSI transport class v2.0-870. May 12 23:59:16.882883 kernel: iscsi: registered transport (tcp) May 12 23:59:16.896153 kernel: iscsi: registered transport (qla4xxx) May 12 23:59:16.896201 kernel: QLogic iSCSI HBA Driver May 12 23:59:16.915821 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 12 23:59:16.916815 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 12 23:59:16.940220 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 12 23:59:16.940256 kernel: device-mapper: uevent: version 1.0.3 May 12 23:59:16.941394 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 12 23:59:16.972875 kernel: raid6: avx2x4 gen() 47443 MB/s May 12 23:59:16.988874 kernel: raid6: avx2x2 gen() 52949 MB/s May 12 23:59:17.006075 kernel: raid6: avx2x1 gen() 44976 MB/s May 12 23:59:17.006123 kernel: raid6: using algorithm avx2x2 gen() 52949 MB/s May 12 23:59:17.024088 kernel: raid6: .... xor() 32113 MB/s, rmw enabled May 12 23:59:17.024132 kernel: raid6: using avx2x2 recovery algorithm May 12 23:59:17.037878 kernel: xor: automatically using best checksumming function avx May 12 23:59:17.128048 kernel: Btrfs loaded, zoned=no, fsverity=no May 12 23:59:17.133314 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 12 23:59:17.134271 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 23:59:17.150851 systemd-udevd[436]: Using default interface naming scheme 'v255'. May 12 23:59:17.153744 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 23:59:17.156000 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 12 23:59:17.167247 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation May 12 23:59:17.181737 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 12 23:59:17.182769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 12 23:59:17.261900 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 12 23:59:17.265334 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 12 23:59:17.285047 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 12 23:59:17.285855 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 12 23:59:17.286839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 12 23:59:17.287350 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 12 23:59:17.288390 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 12 23:59:17.302377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 12 23:59:17.326924 kernel: libata version 3.00 loaded. May 12 23:59:17.339211 kernel: ata_piix 0000:00:07.1: version 2.13 May 12 23:59:17.339359 kernel: scsi host0: ata_piix May 12 23:59:17.345875 kernel: scsi host1: ata_piix May 12 23:59:17.350137 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 12 23:59:17.350155 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 12 23:59:17.350164 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 12 23:59:17.350172 kernel: vmw_pvscsi: using 64bit dma May 12 23:59:17.355373 kernel: vmw_pvscsi: max_id: 16 May 12 23:59:17.355390 kernel: vmw_pvscsi: setting ring_pages to 8 May 12 23:59:17.359549 kernel: vmw_pvscsi: enabling reqCallThreshold May 12 23:59:17.359567 kernel: vmw_pvscsi: driver-based request coalescing enabled May 12 23:59:17.359575 kernel: vmw_pvscsi: using MSI-X May 12 23:59:17.359583 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 12 23:59:17.359590 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 12 23:59:17.361986 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 12 23:59:17.362098 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2 May 12 23:59:17.362182 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 12 23:59:17.364407 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 12 23:59:17.376878 kernel: cryptd: max_cpu_qlen set to 1000 May 12 23:59:17.381663 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 12 23:59:17.381928 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 23:59:17.382258 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 12 23:59:17.382509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 12 23:59:17.382713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:59:17.383022 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 12 23:59:17.383694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 23:59:17.403997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:59:17.404948 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 12 23:59:17.429411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 23:59:17.516944 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 12 23:59:17.522899 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 12 23:59:17.527880 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 12 23:59:17.534873 kernel: AVX2 version of gcm_enc/dec engaged. 
May 12 23:59:17.537906 kernel: AES CTR mode by8 optimization enabled May 12 23:59:17.545988 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 12 23:59:17.546094 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 12 23:59:17.547771 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 12 23:59:17.549148 kernel: sd 2:0:0:0: [sda] Write Protect is off May 12 23:59:17.549235 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00 May 12 23:59:17.549302 kernel: sd 2:0:0:0: [sda] Cache data unavailable May 12 23:59:17.549366 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through May 12 23:59:17.558883 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 May 12 23:59:17.581886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 12 23:59:17.582884 kernel: sd 2:0:0:0: [sda] Attached SCSI disk May 12 23:59:17.617312 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (489) May 12 23:59:17.621896 kernel: BTRFS: device fsid d5ab0fb8-9c4f-4805-8fe7-b120550325cd devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (494) May 12 23:59:17.628992 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 12 23:59:17.634822 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 12 23:59:17.640492 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 12 23:59:17.644977 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 12 23:59:17.645126 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 12 23:59:17.645838 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 12 23:59:17.683252 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 12 23:59:17.689882 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 12 23:59:18.734891 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 12 23:59:18.735763 disk-uuid[596]: The operation has completed successfully. May 12 23:59:19.072944 systemd[1]: disk-uuid.service: Deactivated successfully. May 12 23:59:19.073008 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 12 23:59:19.073878 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 12 23:59:19.091043 sh[612]: Success May 12 23:59:19.113882 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 12 23:59:19.387212 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 12 23:59:19.390920 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 12 23:59:19.400050 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 12 23:59:19.414882 kernel: BTRFS info (device dm-0): first mount of filesystem d5ab0fb8-9c4f-4805-8fe7-b120550325cd May 12 23:59:19.414920 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 12 23:59:19.414929 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 12 23:59:19.415638 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 12 23:59:19.417226 kernel: BTRFS info (device dm-0): using free space tree May 12 23:59:19.423879 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 12 23:59:19.425759 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 12 23:59:19.426611 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 12 23:59:19.427816 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 12 23:59:19.530877 kernel: BTRFS info (device sda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 12 23:59:19.533880 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 12 23:59:19.533908 kernel: BTRFS info (device sda6): using free space tree May 12 23:59:19.537880 kernel: BTRFS info (device sda6): enabling ssd optimizations May 12 23:59:19.542885 kernel: BTRFS info (device sda6): last unmount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 12 23:59:19.549703 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 12 23:59:19.550958 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 12 23:59:19.585105 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 12 23:59:19.585995 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 12 23:59:19.639030 ignition[668]: Ignition 2.20.0 May 12 23:59:19.639042 ignition[668]: Stage: fetch-offline May 12 23:59:19.639064 ignition[668]: no configs at "/usr/lib/ignition/base.d" May 12 23:59:19.639069 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:59:19.639123 ignition[668]: parsed url from cmdline: "" May 12 23:59:19.639124 ignition[668]: no config URL provided May 12 23:59:19.639127 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" May 12 23:59:19.639132 ignition[668]: no config at "/usr/lib/ignition/user.ign" May 12 23:59:19.639507 ignition[668]: config successfully fetched May 12 23:59:19.639524 ignition[668]: parsing config with SHA512: 052f5ee26d51085fa97a4d63146a0bfb6fbe53aa10d132c40903a72a02ea73701edd63a7a4cbf45286d0ca5e3b1e1bb0113713d03e3d55fc50bf83721338c167 May 12 23:59:19.642011 unknown[668]: fetched base config from "system" May 12 23:59:19.642020 unknown[668]: fetched user config from "vmware" May 12 23:59:19.642247 ignition[668]: fetch-offline: 
fetch-offline passed May 12 23:59:19.643015 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 12 23:59:19.642288 ignition[668]: Ignition finished successfully May 12 23:59:19.662960 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 12 23:59:19.663934 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 12 23:59:19.693137 systemd-networkd[801]: lo: Link UP May 12 23:59:19.693371 systemd-networkd[801]: lo: Gained carrier May 12 23:59:19.694298 systemd-networkd[801]: Enumeration completed May 12 23:59:19.694465 systemd[1]: Started systemd-networkd.service - Network Configuration. May 12 23:59:19.694619 systemd[1]: Reached target network.target - Network. May 12 23:59:19.694781 systemd-networkd[801]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 12 23:59:19.696052 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 12 23:59:19.698844 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 12 23:59:19.698962 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 12 23:59:19.697941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 12 23:59:19.699455 systemd-networkd[801]: ens192: Link UP May 12 23:59:19.699577 systemd-networkd[801]: ens192: Gained carrier May 12 23:59:19.723191 ignition[804]: Ignition 2.20.0 May 12 23:59:19.723199 ignition[804]: Stage: kargs May 12 23:59:19.723301 ignition[804]: no configs at "/usr/lib/ignition/base.d" May 12 23:59:19.723308 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:59:19.723850 ignition[804]: kargs: kargs passed May 12 23:59:19.723891 ignition[804]: Ignition finished successfully May 12 23:59:19.725416 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 12 23:59:19.726320 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 12 23:59:19.744888 ignition[812]: Ignition 2.20.0 May 12 23:59:19.744899 ignition[812]: Stage: disks May 12 23:59:19.745037 ignition[812]: no configs at "/usr/lib/ignition/base.d" May 12 23:59:19.745046 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:59:19.745639 ignition[812]: disks: disks passed May 12 23:59:19.745666 ignition[812]: Ignition finished successfully May 12 23:59:19.746494 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 12 23:59:19.746894 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 12 23:59:19.747017 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 12 23:59:19.747147 systemd[1]: Reached target local-fs.target - Local File Systems. May 12 23:59:19.747253 systemd[1]: Reached target sysinit.target - System Initialization. May 12 23:59:19.747357 systemd[1]: Reached target basic.target - Basic System. May 12 23:59:19.748979 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 12 23:59:19.805109 systemd-fsck[820]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 12 23:59:19.808370 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 12 23:59:19.809413 systemd[1]: Mounting sysroot.mount - /sysroot... May 12 23:59:19.891680 systemd[1]: Mounted sysroot.mount - /sysroot. May 12 23:59:19.891937 kernel: EXT4-fs (sda9): mounted filesystem c9958eea-1ed5-48cc-be53-8e1c8ef051da r/w with ordered data mode. Quota mode: none. May 12 23:59:19.892103 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 12 23:59:19.893196 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 12 23:59:19.894911 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
May 12 23:59:19.895345 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 12 23:59:19.895375 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 12 23:59:19.895390 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 12 23:59:19.902090 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 12 23:59:19.903092 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 12 23:59:19.946885 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (828) May 12 23:59:19.958951 kernel: BTRFS info (device sda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 12 23:59:19.958981 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 12 23:59:19.958993 kernel: BTRFS info (device sda6): using free space tree May 12 23:59:19.974901 kernel: BTRFS info (device sda6): enabling ssd optimizations May 12 23:59:19.977169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 12 23:59:19.996685 initrd-setup-root[852]: cut: /sysroot/etc/passwd: No such file or directory May 12 23:59:20.000335 initrd-setup-root[859]: cut: /sysroot/etc/group: No such file or directory May 12 23:59:20.003744 initrd-setup-root[866]: cut: /sysroot/etc/shadow: No such file or directory May 12 23:59:20.006378 initrd-setup-root[873]: cut: /sysroot/etc/gshadow: No such file or directory May 12 23:59:20.078809 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 12 23:59:20.079919 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 12 23:59:20.082396 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 12 23:59:20.089874 kernel: BTRFS info (device sda6): last unmount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06
May 12 23:59:20.109227 ignition[941]: INFO : Ignition 2.20.0
May 12 23:59:20.109227 ignition[941]: INFO : Stage: mount
May 12 23:59:20.109581 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 23:59:20.109581 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 12 23:59:20.109837 ignition[941]: INFO : mount: mount passed
May 12 23:59:20.110517 ignition[941]: INFO : Ignition finished successfully
May 12 23:59:20.110758 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 12 23:59:20.111496 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 12 23:59:20.153633 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 12 23:59:20.412952 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 12 23:59:20.414229 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 23:59:20.443905 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (952)
May 12 23:59:20.448428 kernel: BTRFS info (device sda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06
May 12 23:59:20.448466 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 12 23:59:20.448481 kernel: BTRFS info (device sda6): using free space tree
May 12 23:59:20.454301 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 12 23:59:20.454172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 23:59:20.475007 ignition[968]: INFO : Ignition 2.20.0
May 12 23:59:20.475007 ignition[968]: INFO : Stage: files
May 12 23:59:20.475403 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 23:59:20.475403 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 12 23:59:20.475808 ignition[968]: DEBUG : files: compiled without relabeling support, skipping
May 12 23:59:20.476280 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 12 23:59:20.476280 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 12 23:59:20.478357 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 12 23:59:20.478544 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 12 23:59:20.478698 unknown[968]: wrote ssh authorized keys file for user: core
May 12 23:59:20.478936 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 12 23:59:20.480848 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 12 23:59:20.481046 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 12 23:59:20.575247 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 12 23:59:20.762482 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 12 23:59:20.762482 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 12 23:59:20.762482 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 12 23:59:20.921159 systemd-networkd[801]: ens192: Gained IPv6LL
May 12 23:59:21.337240 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 12 23:59:21.409131 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 12 23:59:21.409131 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 12 23:59:21.409602 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 12 23:59:21.409602 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 12 23:59:21.409602 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 12 23:59:21.409602 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 23:59:21.409602 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 23:59:21.409602 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 23:59:21.409602 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 23:59:21.409602 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 12 23:59:21.410908 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 12 23:59:21.410908 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 12 23:59:21.410908 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 12 23:59:21.410908 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 12 23:59:21.410908 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 12 23:59:21.831250 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 12 23:59:22.085134 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 12 23:59:22.085134 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 12 23:59:22.085134 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(f): [started] processing unit "coreos-metadata.service"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service"
May 12 23:59:22.085134 ignition[968]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
May 12 23:59:22.110368 ignition[968]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 12 23:59:22.112878 ignition[968]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 12 23:59:22.112878 ignition[968]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
May 12 23:59:22.112878 ignition[968]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 12 23:59:22.112878 ignition[968]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 12 23:59:22.113427 ignition[968]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
May 12 23:59:22.113427 ignition[968]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 12 23:59:22.113427 ignition[968]: INFO : files: files passed
May 12 23:59:22.113427 ignition[968]: INFO : Ignition finished successfully
May 12 23:59:22.113574 systemd[1]: Finished ignition-files.service - Ignition (files).
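The file, link, and unit operations in this files stage are the kind that a Butane/Ignition config would declare. A hypothetical Butane fragment (paths and URLs copied from the log; the ssh key, unit bodies, and everything else are illustrative placeholders, not the actual config used here) might look like:

```yaml
# Hypothetical Butane sketch (variant: flatcar) consistent with the ops logged above.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
    - path: /opt/bin/cilium.tar.gz
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true          # matches "setting preset to enabled" in the log
      contents: |            # placeholder unit body
        [Unit]
        Description=Unpack helm to /opt/bin
    - name: coreos-metadata.service
      enabled: false         # matches "setting preset to disabled" in the log
      contents: |            # placeholder unit body
        [Unit]
        Description=Custom metadata agent
```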
May 12 23:59:22.114561 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 12 23:59:22.115922 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 12 23:59:22.126971 systemd[1]: ignition-quench.service: Deactivated successfully.
May 12 23:59:22.127164 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 12 23:59:22.130400 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:59:22.130400 initrd-setup-root-after-ignition[1001]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:59:22.131043 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:59:22.131771 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 23:59:22.132217 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 12 23:59:22.132933 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 12 23:59:22.157857 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 12 23:59:22.157948 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 12 23:59:22.158337 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 12 23:59:22.158457 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 12 23:59:22.158658 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 12 23:59:22.159128 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 12 23:59:22.169406 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 23:59:22.170229 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 12 23:59:22.180419 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 12 23:59:22.180610 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 23:59:22.180854 systemd[1]: Stopped target timers.target - Timer Units.
May 12 23:59:22.181059 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 12 23:59:22.181124 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 23:59:22.181483 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 12 23:59:22.181647 systemd[1]: Stopped target basic.target - Basic System.
May 12 23:59:22.181806 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 12 23:59:22.182162 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 23:59:22.182360 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 12 23:59:22.182567 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 12 23:59:22.182754 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 23:59:22.182979 systemd[1]: Stopped target sysinit.target - System Initialization.
May 12 23:59:22.183183 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 12 23:59:22.183364 systemd[1]: Stopped target swap.target - Swaps.
May 12 23:59:22.183513 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 12 23:59:22.183578 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 12 23:59:22.183827 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 12 23:59:22.184073 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 23:59:22.184247 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 12 23:59:22.184291 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 23:59:22.184460 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 12 23:59:22.184520 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 12 23:59:22.184787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 12 23:59:22.184851 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 23:59:22.185076 systemd[1]: Stopped target paths.target - Path Units.
May 12 23:59:22.185195 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 12 23:59:22.189891 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 23:59:22.190067 systemd[1]: Stopped target slices.target - Slice Units.
May 12 23:59:22.190294 systemd[1]: Stopped target sockets.target - Socket Units.
May 12 23:59:22.190499 systemd[1]: iscsid.socket: Deactivated successfully.
May 12 23:59:22.190551 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 12 23:59:22.190709 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 12 23:59:22.190756 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 23:59:22.190934 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 12 23:59:22.190999 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 23:59:22.191236 systemd[1]: ignition-files.service: Deactivated successfully.
May 12 23:59:22.191297 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 12 23:59:22.192984 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 12 23:59:22.194984 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 12 23:59:22.195105 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 12 23:59:22.195181 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 23:59:22.195347 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 12 23:59:22.195412 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 23:59:22.200893 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 12 23:59:22.200950 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 12 23:59:22.206745 ignition[1025]: INFO : Ignition 2.20.0
May 12 23:59:22.206745 ignition[1025]: INFO : Stage: umount
May 12 23:59:22.207797 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 23:59:22.207797 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 12 23:59:22.207797 ignition[1025]: INFO : umount: umount passed
May 12 23:59:22.207797 ignition[1025]: INFO : Ignition finished successfully
May 12 23:59:22.208092 systemd[1]: ignition-mount.service: Deactivated successfully.
May 12 23:59:22.208167 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 12 23:59:22.209953 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 12 23:59:22.210221 systemd[1]: Stopped target network.target - Network.
May 12 23:59:22.210309 systemd[1]: ignition-disks.service: Deactivated successfully.
May 12 23:59:22.210335 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 12 23:59:22.210439 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 12 23:59:22.210462 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 12 23:59:22.210560 systemd[1]: ignition-setup.service: Deactivated successfully.
May 12 23:59:22.210587 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 12 23:59:22.210684 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 12 23:59:22.210704 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 12 23:59:22.211596 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 12 23:59:22.211780 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 12 23:59:22.213216 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 12 23:59:22.213282 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 12 23:59:22.214407 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 12 23:59:22.214541 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 12 23:59:22.214564 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 23:59:22.215360 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 12 23:59:22.217629 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 12 23:59:22.217693 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 12 23:59:22.218400 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 12 23:59:22.218494 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 12 23:59:22.218511 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 12 23:59:22.219181 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 12 23:59:22.219276 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 12 23:59:22.219302 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 23:59:22.219427 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
May 12 23:59:22.219448 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
May 12 23:59:22.219567 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 12 23:59:22.219588 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 12 23:59:22.219740 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 12 23:59:22.219761 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 12 23:59:22.220160 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 23:59:22.221493 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 12 23:59:22.233345 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 12 23:59:22.233587 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 23:59:22.234026 systemd[1]: network-cleanup.service: Deactivated successfully.
May 12 23:59:22.234210 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 12 23:59:22.234651 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 12 23:59:22.234791 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 12 23:59:22.234974 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 12 23:59:22.234992 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 23:59:22.235602 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 12 23:59:22.235629 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 12 23:59:22.235785 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 12 23:59:22.235808 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 12 23:59:22.235943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 12 23:59:22.235964 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 23:59:22.237185 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 12 23:59:22.237293 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 12 23:59:22.237320 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 23:59:22.237725 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 12 23:59:22.237748 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 23:59:22.238053 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 12 23:59:22.238076 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 23:59:22.238475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 23:59:22.238497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:59:22.242859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 12 23:59:22.242940 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 12 23:59:22.316959 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 12 23:59:22.317044 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 12 23:59:22.317401 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 12 23:59:22.317530 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 12 23:59:22.317566 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 12 23:59:22.318367 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 12 23:59:22.330969 systemd[1]: Switching root.
May 12 23:59:22.350087 systemd-journald[217]: Journal stopped
May 12 23:59:23.997599 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
May 12 23:59:23.997628 kernel: SELinux: policy capability network_peer_controls=1
May 12 23:59:23.997636 kernel: SELinux: policy capability open_perms=1
May 12 23:59:23.997642 kernel: SELinux: policy capability extended_socket_class=1
May 12 23:59:23.997648 kernel: SELinux: policy capability always_check_network=0
May 12 23:59:23.997653 kernel: SELinux: policy capability cgroup_seclabel=1
May 12 23:59:23.997661 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 12 23:59:23.997667 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 12 23:59:23.997673 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 12 23:59:23.997678 kernel: audit: type=1403 audit(1747094363.440:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 12 23:59:23.997685 systemd[1]: Successfully loaded SELinux policy in 34.451ms.
May 12 23:59:23.997692 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.728ms.
May 12 23:59:23.997699 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 12 23:59:23.997707 systemd[1]: Detected virtualization vmware.
May 12 23:59:23.997714 systemd[1]: Detected architecture x86-64.
May 12 23:59:23.997720 systemd[1]: Detected first boot.
May 12 23:59:23.997727 systemd[1]: Initializing machine ID from random generator.
May 12 23:59:23.997735 zram_generator::config[1070]: No configuration found.
May 12 23:59:23.997828 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
May 12 23:59:23.997840 kernel: Guest personality initialized and is active
May 12 23:59:23.997846 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 12 23:59:23.997852 kernel: Initialized host personality
May 12 23:59:23.997858 kernel: NET: Registered PF_VSOCK protocol family
May 12 23:59:23.997871 systemd[1]: Populated /etc with preset unit settings.
May 12 23:59:23.997882 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
May 12 23:59:23.997889 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
May 12 23:59:23.997897 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 12 23:59:23.997903 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 12 23:59:23.997910 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 12 23:59:23.997916 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 12 23:59:23.997923 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 12 23:59:23.997931 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 12 23:59:23.997938 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 12 23:59:23.997945 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 12 23:59:23.997952 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 12 23:59:23.997958 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 12 23:59:23.997966 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
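The "Ignoring unknown escape sequences" warning above comes from the backslashes (`\K`, `\d`) in the shell snippet inlined in coreos-metadata.service, which systemd's unit parser tries to interpret before the shell does. A standalone sketch of the same extraction logic (reconstructed from the log; requires GNU grep for `-P`; the sample `ip addr show` output is a stand-in so the pipeline can run without the ens192 interface):

```shell
# Stand-in for: ip addr show ens192 (sample addresses are illustrative)
addr_output='    inet 10.0.0.5/24 brd 10.0.0.255 scope global ens192
    inet 192.0.2.7/24 brd 192.0.2.255 scope global ens192'

# Same grep pipelines as the unit's ExecStart: the private address is the
# one in 10.0.0.0/8, the public address is any other inet line.
private=$(printf '%s\n' "$addr_output" | grep "inet 10." | grep -Po 'inet \K[\d.]+')
public=$(printf '%s\n' "$addr_output" | grep -v "inet 10." | grep -Po 'inet \K[\d.]+')
echo "COREOS_CUSTOM_PRIVATE_IPV4=${private}"
echo "COREOS_CUSTOM_PUBLIC_IPV4=${public}"
```

Shipping this as a script file and calling it from `ExecStart=` avoids the escape-sequence warning entirely, since the backslashes never pass through the unit parser.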
May 12 23:59:23.997972 systemd[1]: Created slice user.slice - User and Session Slice.
May 12 23:59:23.997979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 23:59:23.997988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 23:59:23.997996 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 12 23:59:23.998003 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 12 23:59:23.998010 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 12 23:59:23.998017 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 23:59:23.998024 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 12 23:59:23.998030 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 23:59:23.998039 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 12 23:59:23.998046 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 12 23:59:23.998053 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 12 23:59:23.998060 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 12 23:59:23.998067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 23:59:23.998074 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 23:59:23.998081 systemd[1]: Reached target slices.target - Slice Units.
May 12 23:59:23.998088 systemd[1]: Reached target swap.target - Swaps.
May 12 23:59:23.998094 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 12 23:59:23.998102 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 12 23:59:23.998110 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 12 23:59:23.998117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 23:59:23.998124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 23:59:23.998132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 23:59:23.998139 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 12 23:59:23.998146 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 12 23:59:23.998153 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 12 23:59:23.998159 systemd[1]: Mounting media.mount - External Media Directory...
May 12 23:59:23.998166 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 12 23:59:23.998173 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 12 23:59:23.998180 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 12 23:59:23.998188 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 12 23:59:23.998196 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 12 23:59:23.998203 systemd[1]: Reached target machines.target - Containers.
May 12 23:59:23.998210 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 12 23:59:23.998217 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
May 12 23:59:23.998223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 23:59:23.998230 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 12 23:59:23.998237 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 23:59:23.998245 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 23:59:23.998253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 23:59:23.998260 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 12 23:59:23.998267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 23:59:23.998274 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 12 23:59:23.998281 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 12 23:59:23.998288 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 12 23:59:23.998295 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 12 23:59:23.998302 systemd[1]: Stopped systemd-fsck-usr.service.
May 12 23:59:23.998310 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 23:59:23.998317 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 23:59:23.998324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 23:59:23.998331 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 12 23:59:23.998338 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 12 23:59:23.998345 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 12 23:59:23.998351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 23:59:23.998359 systemd[1]: verity-setup.service: Deactivated successfully. May 12 23:59:23.998367 systemd[1]: Stopped verity-setup.service. May 12 23:59:23.998375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:59:23.998382 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 12 23:59:23.998389 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 12 23:59:23.998395 systemd[1]: Mounted media.mount - External Media Directory. May 12 23:59:23.998402 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 12 23:59:23.998409 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 12 23:59:23.998416 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 12 23:59:23.998422 kernel: loop: module loaded May 12 23:59:23.998430 kernel: fuse: init (API version 7.39) May 12 23:59:23.998437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 12 23:59:23.998444 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 12 23:59:23.998451 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 12 23:59:23.998458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 23:59:23.998465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 23:59:23.998472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 23:59:23.998479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 23:59:23.998487 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 12 23:59:23.998494 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 12 23:59:23.998502 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
May 12 23:59:23.998509 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 23:59:23.998516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 23:59:23.998522 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 12 23:59:23.998529 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 12 23:59:23.998536 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 12 23:59:23.998544 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 12 23:59:23.998555 systemd[1]: Reached target network-pre.target - Preparation for Network. May 12 23:59:23.998581 systemd-journald[1174]: Collecting audit messages is disabled. May 12 23:59:23.998603 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 12 23:59:23.998612 systemd-journald[1174]: Journal started May 12 23:59:23.998629 systemd-journald[1174]: Runtime Journal (/run/log/journal/e1a32715a33c4896812fd99a3eb8b92a) is 4.8M, max 38.6M, 33.7M free. May 12 23:59:24.016011 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 12 23:59:24.016045 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 12 23:59:24.016057 systemd[1]: Reached target local-fs.target - Local File Systems. May 12 23:59:24.016066 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 12 23:59:24.016074 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 12 23:59:24.016083 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 12 23:59:24.016095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 12 23:59:23.802766 systemd[1]: Queued start job for default target multi-user.target. May 12 23:59:23.810939 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 12 23:59:23.811202 systemd[1]: systemd-journald.service: Deactivated successfully. May 12 23:59:24.016639 jq[1140]: true May 12 23:59:24.017155 jq[1186]: true May 12 23:59:24.021978 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 12 23:59:24.030874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 12 23:59:24.031872 kernel: ACPI: bus type drm_connector registered May 12 23:59:24.035888 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 12 23:59:24.037989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 12 23:59:24.046925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 12 23:59:24.054877 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 12 23:59:24.061931 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 12 23:59:24.068437 systemd[1]: Started systemd-journald.service - Journal Service. May 12 23:59:24.067646 systemd[1]: modprobe@drm.service: Deactivated successfully. May 12 23:59:24.067766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 12 23:59:24.067956 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 12 23:59:24.068351 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 12 23:59:24.072317 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 12 23:59:24.077084 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
May 12 23:59:24.087251 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 12 23:59:24.089965 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 12 23:59:24.092075 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 12 23:59:24.114036 kernel: loop0: detected capacity change from 0 to 109808 May 12 23:59:24.131123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 12 23:59:24.131455 systemd-journald[1174]: Time spent on flushing to /var/log/journal/e1a32715a33c4896812fd99a3eb8b92a is 20.613ms for 1857 entries. May 12 23:59:24.131455 systemd-journald[1174]: System Journal (/var/log/journal/e1a32715a33c4896812fd99a3eb8b92a) is 8M, max 584.8M, 576.8M free. May 12 23:59:24.164839 systemd-journald[1174]: Received client request to flush runtime journal. May 12 23:59:24.136894 ignition[1188]: Ignition 2.20.0 May 12 23:59:24.147484 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 12 23:59:24.137074 ignition[1188]: deleting config from guestinfo properties May 12 23:59:24.142137 ignition[1188]: Successfully deleted config May 12 23:59:24.165844 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 12 23:59:24.166155 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 12 23:59:24.167773 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. May 12 23:59:24.167911 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. May 12 23:59:24.170513 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 12 23:59:24.177596 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 12 23:59:24.178030 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 12 23:59:24.179013 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 12 23:59:24.186943 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 12 23:59:24.201054 udevadm[1241]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 12 23:59:24.205916 kernel: loop1: detected capacity change from 0 to 151640 May 12 23:59:24.223390 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 12 23:59:24.227001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 12 23:59:24.247296 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 12 23:59:24.247894 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 12 23:59:24.251902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 23:59:24.255891 kernel: loop2: detected capacity change from 0 to 2960 May 12 23:59:24.318949 kernel: loop3: detected capacity change from 0 to 218376 May 12 23:59:24.352256 kernel: loop4: detected capacity change from 0 to 109808 May 12 23:59:24.379882 kernel: loop5: detected capacity change from 0 to 151640 May 12 23:59:24.407885 kernel: loop6: detected capacity change from 0 to 2960 May 12 23:59:24.427880 kernel: loop7: detected capacity change from 0 to 218376 May 12 23:59:24.443563 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 12 23:59:24.443920 (sd-merge)[1252]: Merged extensions into '/usr'. May 12 23:59:24.448271 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)... May 12 23:59:24.448503 systemd[1]: Reloading... May 12 23:59:24.522979 zram_generator::config[1278]: No configuration found. 
May 12 23:59:24.624624 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 12 23:59:24.643601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:59:24.685829 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 12 23:59:24.686097 systemd[1]: Reloading finished in 236 ms. May 12 23:59:24.699001 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 12 23:59:24.699384 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 12 23:59:24.708939 systemd[1]: Starting ensure-sysext.service... May 12 23:59:24.710341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 12 23:59:24.718003 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 23:59:24.731932 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... May 12 23:59:24.731941 systemd[1]: Reloading... May 12 23:59:24.735358 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 12 23:59:24.735537 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 12 23:59:24.736043 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 12 23:59:24.736199 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. May 12 23:59:24.736236 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. May 12 23:59:24.740193 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. 
May 12 23:59:24.740198 systemd-tmpfiles[1339]: Skipping /boot May 12 23:59:24.745574 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 12 23:59:24.749664 systemd-udevd[1340]: Using default interface naming scheme 'v255'. May 12 23:59:24.751450 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. May 12 23:59:24.751973 systemd-tmpfiles[1339]: Skipping /boot May 12 23:59:24.784876 zram_generator::config[1373]: No configuration found. May 12 23:59:24.890888 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 12 23:59:24.895959 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1389) May 12 23:59:24.902486 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 12 23:59:24.921912 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 12 23:59:24.930143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:59:24.933265 kernel: ACPI: button: Power Button [PWRF] May 12 23:59:24.996742 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 12 23:59:24.997044 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 12 23:59:24.997585 systemd[1]: Reloading finished in 265 ms. May 12 23:59:25.004349 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 23:59:25.005325 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 12 23:59:25.010768 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 12 23:59:25.011769 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 23:59:25.029110 (udev-worker)[1382]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 12 23:59:25.040914 kernel: mousedev: PS/2 mouse device common for all mice May 12 23:59:25.055855 systemd[1]: Finished ensure-sysext.service. May 12 23:59:25.058073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:59:25.058952 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 23:59:25.060986 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 12 23:59:25.067904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 23:59:25.068693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 12 23:59:25.071397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 23:59:25.072964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 23:59:25.073151 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 23:59:25.074769 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 12 23:59:25.074904 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 23:59:25.080943 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 12 23:59:25.085015 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 12 23:59:25.090045 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 12 23:59:25.092973 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 12 23:59:25.094627 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 12 23:59:25.100378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 23:59:25.100512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:59:25.101353 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 12 23:59:25.101625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 23:59:25.101827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 23:59:25.102068 systemd[1]: modprobe@drm.service: Deactivated successfully. May 12 23:59:25.102489 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 12 23:59:25.102727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 23:59:25.102827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 23:59:25.103179 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 23:59:25.103350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 23:59:25.106642 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 12 23:59:25.113023 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 12 23:59:25.113167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 12 23:59:25.113203 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 12 23:59:25.117954 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 12 23:59:25.127702 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 12 23:59:25.129390 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 12 23:59:25.131595 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 12 23:59:25.138194 lvm[1495]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 12 23:59:25.143209 augenrules[1505]: No rules May 12 23:59:25.144456 systemd[1]: audit-rules.service: Deactivated successfully. May 12 23:59:25.144911 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 23:59:25.148173 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 12 23:59:25.148393 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 12 23:59:25.154707 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 12 23:59:25.163912 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 12 23:59:25.166247 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 12 23:59:25.166446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 12 23:59:25.167471 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 12 23:59:25.180321 lvm[1519]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 12 23:59:25.202122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 12 23:59:25.209237 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 12 23:59:25.237450 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 12 23:59:25.237711 systemd[1]: Reached target time-set.target - System Time Set. May 12 23:59:25.256801 systemd-resolved[1474]: Positive Trust Anchors: May 12 23:59:25.256809 systemd-resolved[1474]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 12 23:59:25.256833 systemd-resolved[1474]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 12 23:59:25.259741 systemd-networkd[1472]: lo: Link UP May 12 23:59:25.259873 systemd-networkd[1472]: lo: Gained carrier May 12 23:59:25.260728 systemd-networkd[1472]: Enumeration completed May 12 23:59:25.260813 systemd[1]: Started systemd-networkd.service - Network Configuration. May 12 23:59:25.261160 systemd-networkd[1472]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 12 23:59:25.263878 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 12 23:59:25.263993 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 12 23:59:25.264330 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 12 23:59:25.266512 systemd-resolved[1474]: Defaulting to hostname 'linux'. 
May 12 23:59:25.266996 systemd-networkd[1472]: ens192: Link UP May 12 23:59:25.267135 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 12 23:59:25.267298 systemd-networkd[1472]: ens192: Gained carrier May 12 23:59:25.268075 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 12 23:59:25.268225 systemd[1]: Reached target network.target - Network. May 12 23:59:25.268311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 12 23:59:25.268427 systemd[1]: Reached target sysinit.target - System Initialization. May 12 23:59:25.268562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 12 23:59:25.268680 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 12 23:59:25.269281 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 12 23:59:25.269358 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. May 12 23:59:25.269444 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 12 23:59:25.269549 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 12 23:59:25.269651 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 12 23:59:25.269664 systemd[1]: Reached target paths.target - Path Units. May 12 23:59:25.269744 systemd[1]: Reached target timers.target - Timer Units. May 12 23:59:25.276036 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 12 23:59:25.277053 systemd[1]: Starting docker.socket - Docker Socket for the API... May 12 23:59:25.278388 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
May 12 23:59:25.278624 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 12 23:59:25.278735 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 12 23:59:25.280012 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 12 23:59:25.280296 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 12 23:59:25.280811 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 12 23:59:25.280953 systemd[1]: Reached target sockets.target - Socket Units. May 12 23:59:25.281039 systemd[1]: Reached target basic.target - Basic System. May 12 23:59:25.281147 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 12 23:59:25.281165 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 12 23:59:25.281805 systemd[1]: Starting containerd.service - containerd container runtime... May 12 23:59:25.284798 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 12 23:59:25.285631 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 12 23:59:25.286993 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 12 23:59:25.287093 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 12 23:59:25.289565 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 12 23:59:25.291278 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 12 23:59:25.295756 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 12 23:59:25.298609 jq[1534]: false May 12 23:59:25.299101 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 12 23:59:25.306068 systemd[1]: Starting systemd-logind.service - User Login Management... May 12 23:59:25.307024 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 12 23:59:25.307526 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 12 23:59:25.308930 systemd[1]: Starting update-engine.service - Update Engine... May 12 23:59:25.310964 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 12 23:59:25.313440 dbus-daemon[1533]: [system] SELinux support is enabled May 12 23:59:25.314943 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 12 23:59:25.315522 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 12 23:59:25.317811 jq[1542]: true May 12 23:59:25.320959 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 12 23:59:25.323024 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 12 23:59:25.323141 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 12 23:59:25.328095 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 12 23:59:25.328325 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 12 23:59:25.333463 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 12 23:59:25.334117 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 12 23:59:25.334263 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 12 23:59:25.334291 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 12 23:59:25.342083 update_engine[1541]: I20250512 23:59:25.341606 1541 main.cc:92] Flatcar Update Engine starting May 12 23:59:25.344944 systemd[1]: motdgen.service: Deactivated successfully. May 12 23:59:25.346072 extend-filesystems[1535]: Found loop4 May 12 23:59:25.346072 extend-filesystems[1535]: Found loop5 May 12 23:59:25.346072 extend-filesystems[1535]: Found loop6 May 12 23:59:25.346072 extend-filesystems[1535]: Found loop7 May 12 23:59:25.346072 extend-filesystems[1535]: Found sda May 12 23:59:25.346072 extend-filesystems[1535]: Found sda1 May 12 23:59:25.346072 extend-filesystems[1535]: Found sda2 May 12 23:59:25.346072 extend-filesystems[1535]: Found sda3 May 12 23:59:25.346072 extend-filesystems[1535]: Found usr May 12 23:59:25.346072 extend-filesystems[1535]: Found sda4 May 12 23:59:25.346072 extend-filesystems[1535]: Found sda6 May 12 23:59:25.346072 extend-filesystems[1535]: Found sda7 May 12 23:59:25.346072 extend-filesystems[1535]: Found sda9 May 12 23:59:25.346072 extend-filesystems[1535]: Checking size of /dev/sda9 May 12 23:59:25.345534 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 12 23:59:25.352313 jq[1555]: true May 12 23:59:25.352364 update_engine[1541]: I20250512 23:59:25.347250 1541 update_check_scheduler.cc:74] Next update check in 10m23s May 12 23:59:25.349184 systemd[1]: Started update-engine.service - Update Engine. May 12 23:59:25.353056 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 12 23:59:25.353799 tar[1549]: linux-amd64/LICENSE May 12 23:59:25.353799 tar[1549]: linux-amd64/helm May 12 23:59:25.358159 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 12 23:59:25.366305 extend-filesystems[1535]: Old size kept for /dev/sda9 May 12 23:59:25.366305 extend-filesystems[1535]: Found sr0 May 12 23:59:25.367748 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 12 23:59:25.368018 systemd[1]: extend-filesystems.service: Deactivated successfully. May 12 23:59:25.368441 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 12 23:59:25.374913 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 12 23:59:25.404935 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 12 23:59:25.444876 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1386) May 12 23:59:25.448768 systemd-logind[1540]: Watching system buttons on /dev/input/event1 (Power Button) May 12 23:59:25.448950 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 12 23:59:25.449219 systemd-logind[1540]: New seat seat0. May 12 23:59:25.449359 unknown[1574]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 12 23:59:25.450025 systemd[1]: Started systemd-logind.service - User Login Management. May 12 23:59:25.451441 unknown[1574]: Core dump limit set to -1 May 12 23:59:25.471920 bash[1595]: Updated "/home/core/.ssh/authorized_keys" May 12 23:59:25.475992 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 12 23:59:25.478667 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 12 23:59:25.579555 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 12 23:59:25.617119 locksmithd[1570]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 12 23:59:25.629298 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 12 23:59:25.633109 systemd[1]: Starting issuegen.service - Generate /run/issue... May 12 23:59:25.647498 systemd[1]: issuegen.service: Deactivated successfully. May 12 23:59:25.648025 systemd[1]: Finished issuegen.service - Generate /run/issue. May 12 23:59:25.650037 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 12 23:59:25.652151 containerd[1563]: time="2025-05-12T23:59:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 12 23:59:25.652467 containerd[1563]: time="2025-05-12T23:59:25.652451273Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 12 23:59:25.660879 containerd[1563]: time="2025-05-12T23:59:25.660414754Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="3.987µs" May 12 23:59:25.663184 containerd[1563]: time="2025-05-12T23:59:25.662882070Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 12 23:59:25.663184 containerd[1563]: time="2025-05-12T23:59:25.662901705Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 12 23:59:25.663184 containerd[1563]: time="2025-05-12T23:59:25.662986136Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 12 23:59:25.663184 containerd[1563]: time="2025-05-12T23:59:25.662996169Z" level=info msg="loading plugin" id=io.containerd.content.v1.content 
type=io.containerd.content.v1 May 12 23:59:25.663184 containerd[1563]: time="2025-05-12T23:59:25.663010454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 12 23:59:25.663184 containerd[1563]: time="2025-05-12T23:59:25.663044711Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 12 23:59:25.663184 containerd[1563]: time="2025-05-12T23:59:25.663052965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663525838Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663536612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663543958Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663548734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663604533Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663721735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 12 23:59:25.664020 containerd[1563]: 
time="2025-05-12T23:59:25.663738706Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663745885Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663765522Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663906587Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 12 23:59:25.664020 containerd[1563]: time="2025-05-12T23:59:25.663939475Z" level=info msg="metadata content store policy set" policy=shared May 12 23:59:25.665885 containerd[1563]: time="2025-05-12T23:59:25.665858693Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 12 23:59:25.665946 containerd[1563]: time="2025-05-12T23:59:25.665937368Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 12 23:59:25.666010 containerd[1563]: time="2025-05-12T23:59:25.665983513Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 12 23:59:25.666054 containerd[1563]: time="2025-05-12T23:59:25.666046492Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 12 23:59:25.666086 containerd[1563]: time="2025-05-12T23:59:25.666079879Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 12 23:59:25.666126 containerd[1563]: time="2025-05-12T23:59:25.666114279Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 May 12 23:59:25.666169 containerd[1563]: time="2025-05-12T23:59:25.666161037Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 12 23:59:25.666202 containerd[1563]: time="2025-05-12T23:59:25.666195846Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 12 23:59:25.666237 containerd[1563]: time="2025-05-12T23:59:25.666229973Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 12 23:59:25.666273 containerd[1563]: time="2025-05-12T23:59:25.666265923Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 12 23:59:25.666303 containerd[1563]: time="2025-05-12T23:59:25.666296671Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 12 23:59:25.666334 containerd[1563]: time="2025-05-12T23:59:25.666327987Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 12 23:59:25.666414 containerd[1563]: time="2025-05-12T23:59:25.666405585Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 12 23:59:25.666452 containerd[1563]: time="2025-05-12T23:59:25.666445525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 12 23:59:25.666484 containerd[1563]: time="2025-05-12T23:59:25.666478021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 12 23:59:25.666521 containerd[1563]: time="2025-05-12T23:59:25.666513443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 12 23:59:25.666559 containerd[1563]: time="2025-05-12T23:59:25.666551946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 12 
23:59:25.666702 containerd[1563]: time="2025-05-12T23:59:25.666583207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 12 23:59:25.666702 containerd[1563]: time="2025-05-12T23:59:25.666596378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 12 23:59:25.666702 containerd[1563]: time="2025-05-12T23:59:25.666606724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 12 23:59:25.666702 containerd[1563]: time="2025-05-12T23:59:25.666614060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 12 23:59:25.666702 containerd[1563]: time="2025-05-12T23:59:25.666630440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 12 23:59:25.666702 containerd[1563]: time="2025-05-12T23:59:25.666637548Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 12 23:59:25.666702 containerd[1563]: time="2025-05-12T23:59:25.666680002Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 12 23:59:25.666702 containerd[1563]: time="2025-05-12T23:59:25.666689003Z" level=info msg="Start snapshots syncer" May 12 23:59:25.666870 containerd[1563]: time="2025-05-12T23:59:25.666823303Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 12 23:59:25.667097 containerd[1563]: time="2025-05-12T23:59:25.667031323Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 12 23:59:25.667097 containerd[1563]: time="2025-05-12T23:59:25.667068946Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667208771Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667282244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667297688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667312799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667323978Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667331773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667337631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667343982Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667358564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667368453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 12 23:59:25.667398 containerd[1563]: time="2025-05-12T23:59:25.667374740Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 12 23:59:25.667611 containerd[1563]: time="2025-05-12T23:59:25.667560833Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 12 23:59:25.667611 containerd[1563]: time="2025-05-12T23:59:25.667576255Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 12 23:59:25.667611 containerd[1563]: time="2025-05-12T23:59:25.667582350Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 12 23:59:25.667611 containerd[1563]: time="2025-05-12T23:59:25.667587565Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 12 23:59:25.667611 containerd[1563]: time="2025-05-12T23:59:25.667591914Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 12 23:59:25.667611 containerd[1563]: time="2025-05-12T23:59:25.667597836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 12 23:59:25.667849 containerd[1563]: time="2025-05-12T23:59:25.667603502Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 12 23:59:25.667849 containerd[1563]: time="2025-05-12T23:59:25.667734644Z" level=info msg="runtime interface created" May 12 23:59:25.667849 containerd[1563]: time="2025-05-12T23:59:25.667738891Z" level=info msg="created NRI interface" May 12 23:59:25.667849 containerd[1563]: time="2025-05-12T23:59:25.667743812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 12 23:59:25.667849 containerd[1563]: time="2025-05-12T23:59:25.667750426Z" level=info msg="Connect containerd service" May 12 23:59:25.667849 containerd[1563]: time="2025-05-12T23:59:25.667770527Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 12 23:59:25.668490 
containerd[1563]: time="2025-05-12T23:59:25.668325007Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 23:59:25.672729 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 12 23:59:25.676140 systemd[1]: Started getty@tty1.service - Getty on tty1. May 12 23:59:25.679425 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 12 23:59:25.679603 systemd[1]: Reached target getty.target - Login Prompts. May 12 23:59:25.767285 containerd[1563]: time="2025-05-12T23:59:25.767254506Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 12 23:59:25.767352 containerd[1563]: time="2025-05-12T23:59:25.767299579Z" level=info msg=serving... address=/run/containerd/containerd.sock May 12 23:59:25.767352 containerd[1563]: time="2025-05-12T23:59:25.767321452Z" level=info msg="Start subscribing containerd event" May 12 23:59:25.767352 containerd[1563]: time="2025-05-12T23:59:25.767342648Z" level=info msg="Start recovering state" May 12 23:59:25.767414 containerd[1563]: time="2025-05-12T23:59:25.767402125Z" level=info msg="Start event monitor" May 12 23:59:25.767441 containerd[1563]: time="2025-05-12T23:59:25.767415979Z" level=info msg="Start cni network conf syncer for default" May 12 23:59:25.767441 containerd[1563]: time="2025-05-12T23:59:25.767421823Z" level=info msg="Start streaming server" May 12 23:59:25.767441 containerd[1563]: time="2025-05-12T23:59:25.767430287Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 12 23:59:25.767441 containerd[1563]: time="2025-05-12T23:59:25.767437877Z" level=info msg="runtime interface starting up..." May 12 23:59:25.767441 containerd[1563]: time="2025-05-12T23:59:25.767441364Z" level=info msg="starting plugins..." 
May 12 23:59:25.767503 containerd[1563]: time="2025-05-12T23:59:25.767448920Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 12 23:59:25.767658 containerd[1563]: time="2025-05-12T23:59:25.767647221Z" level=info msg="containerd successfully booted in 0.115988s" May 12 23:59:25.767872 systemd[1]: Started containerd.service - containerd container runtime. May 12 23:59:25.866911 tar[1549]: linux-amd64/README.md May 12 23:59:25.881190 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 12 23:59:26.809006 systemd-networkd[1472]: ens192: Gained IPv6LL May 12 23:59:26.809411 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. May 12 23:59:26.810775 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 12 23:59:26.811188 systemd[1]: Reached target network-online.target - Network is Online. May 12 23:59:26.812441 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 12 23:59:26.813995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:59:26.816152 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 12 23:59:26.843176 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 12 23:59:26.864237 systemd[1]: coreos-metadata.service: Deactivated successfully. May 12 23:59:26.864529 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 12 23:59:26.865365 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 12 23:59:27.620817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:59:27.621301 systemd[1]: Reached target multi-user.target - Multi-User System. May 12 23:59:27.621819 systemd[1]: Startup finished in 984ms (kernel) + 6.808s (initrd) + 4.214s (userspace) = 12.006s. 
May 12 23:59:27.629099 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:59:27.655969 login[1653]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 12 23:59:27.656121 login[1649]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 12 23:59:27.665539 systemd-logind[1540]: New session 2 of user core. May 12 23:59:27.666435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 12 23:59:27.667370 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 12 23:59:27.685485 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 12 23:59:27.687178 systemd[1]: Starting user@500.service - User Manager for UID 500... May 12 23:59:27.697361 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 12 23:59:27.699027 systemd-logind[1540]: New session c1 of user core. May 12 23:59:27.798048 systemd[1728]: Queued start job for default target default.target. May 12 23:59:27.804512 systemd[1728]: Created slice app.slice - User Application Slice. May 12 23:59:27.804529 systemd[1728]: Reached target paths.target - Paths. May 12 23:59:27.804554 systemd[1728]: Reached target timers.target - Timers. May 12 23:59:27.805388 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket... May 12 23:59:27.814466 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 12 23:59:27.814504 systemd[1728]: Reached target sockets.target - Sockets. May 12 23:59:27.814528 systemd[1728]: Reached target basic.target - Basic System. May 12 23:59:27.814549 systemd[1728]: Reached target default.target - Main User Target. May 12 23:59:27.814572 systemd[1728]: Startup finished in 110ms. May 12 23:59:27.814819 systemd[1]: Started user@500.service - User Manager for UID 500. 
May 12 23:59:27.820977 systemd[1]: Started session-2.scope - Session 2 of User core. May 12 23:59:28.098299 kubelet[1721]: E0512 23:59:28.098222 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:59:28.099975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:59:28.100061 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:59:28.100285 systemd[1]: kubelet.service: Consumed 599ms CPU time, 253M memory peak. May 12 23:59:28.316054 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. May 12 23:59:28.657214 login[1653]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 12 23:59:28.660722 systemd-logind[1540]: New session 1 of user core. May 12 23:59:28.666951 systemd[1]: Started session-1.scope - Session 1 of User core. May 12 23:59:38.170504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 12 23:59:38.172222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:59:38.272812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 12 23:59:38.275411 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:59:38.305164 kubelet[1771]: E0512 23:59:38.305128 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:59:38.307211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:59:38.307291 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:59:38.307674 systemd[1]: kubelet.service: Consumed 90ms CPU time, 103M memory peak. May 12 23:59:48.420459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 12 23:59:48.421929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:59:48.495953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:59:48.501121 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:59:48.551887 kubelet[1786]: E0512 23:59:48.551827 1786 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:59:48.553222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:59:48.553299 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:59:48.553694 systemd[1]: kubelet.service: Consumed 90ms CPU time, 106.2M memory peak. 
May 12 23:59:55.579879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 12 23:59:55.581222 systemd[1]: Started sshd@0-139.178.70.99:22-147.75.109.163:45454.service - OpenSSH per-connection server daemon (147.75.109.163:45454). May 12 23:59:55.623770 sshd[1794]: Accepted publickey for core from 147.75.109.163 port 45454 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 12 23:59:55.624494 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:59:55.627859 systemd-logind[1540]: New session 3 of user core. May 12 23:59:55.631055 systemd[1]: Started session-3.scope - Session 3 of User core. May 12 23:59:55.687293 systemd[1]: Started sshd@1-139.178.70.99:22-147.75.109.163:45464.service - OpenSSH per-connection server daemon (147.75.109.163:45464). May 12 23:59:55.729679 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 45464 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 12 23:59:55.730482 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:59:55.733899 systemd-logind[1540]: New session 4 of user core. May 12 23:59:55.740015 systemd[1]: Started session-4.scope - Session 4 of User core. May 12 23:59:55.790057 sshd[1801]: Connection closed by 147.75.109.163 port 45464 May 12 23:59:55.790417 sshd-session[1799]: pam_unix(sshd:session): session closed for user core May 12 23:59:55.801828 systemd[1]: sshd@1-139.178.70.99:22-147.75.109.163:45464.service: Deactivated successfully. May 12 23:59:55.802971 systemd[1]: session-4.scope: Deactivated successfully. May 12 23:59:55.803544 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. May 12 23:59:55.805501 systemd[1]: Started sshd@2-139.178.70.99:22-147.75.109.163:45466.service - OpenSSH per-connection server daemon (147.75.109.163:45466). May 12 23:59:55.806196 systemd-logind[1540]: Removed session 4. 
May 12 23:59:55.841330 sshd[1806]: Accepted publickey for core from 147.75.109.163 port 45466 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 12 23:59:55.842022 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:59:55.844520 systemd-logind[1540]: New session 5 of user core. May 12 23:59:55.854042 systemd[1]: Started session-5.scope - Session 5 of User core. May 12 23:59:55.899406 sshd[1809]: Connection closed by 147.75.109.163 port 45466 May 12 23:59:55.899838 sshd-session[1806]: pam_unix(sshd:session): session closed for user core May 12 23:59:55.909425 systemd[1]: sshd@2-139.178.70.99:22-147.75.109.163:45466.service: Deactivated successfully. May 12 23:59:55.910472 systemd[1]: session-5.scope: Deactivated successfully. May 12 23:59:55.911444 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. May 12 23:59:55.912369 systemd[1]: Started sshd@3-139.178.70.99:22-147.75.109.163:45472.service - OpenSSH per-connection server daemon (147.75.109.163:45472). May 12 23:59:55.912958 systemd-logind[1540]: Removed session 5. May 12 23:59:55.952694 sshd[1814]: Accepted publickey for core from 147.75.109.163 port 45472 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 12 23:59:55.953741 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:59:55.957519 systemd-logind[1540]: New session 6 of user core. May 12 23:59:55.964014 systemd[1]: Started session-6.scope - Session 6 of User core. May 12 23:59:56.013348 sshd[1817]: Connection closed by 147.75.109.163 port 45472 May 12 23:59:56.013272 sshd-session[1814]: pam_unix(sshd:session): session closed for user core May 12 23:59:56.022431 systemd[1]: sshd@3-139.178.70.99:22-147.75.109.163:45472.service: Deactivated successfully. May 12 23:59:56.023510 systemd[1]: session-6.scope: Deactivated successfully. May 12 23:59:56.024040 systemd-logind[1540]: Session 6 logged out. 
Waiting for processes to exit. May 12 23:59:56.025513 systemd[1]: Started sshd@4-139.178.70.99:22-147.75.109.163:45488.service - OpenSSH per-connection server daemon (147.75.109.163:45488). May 12 23:59:56.026244 systemd-logind[1540]: Removed session 6. May 12 23:59:56.057290 sshd[1822]: Accepted publickey for core from 147.75.109.163 port 45488 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 12 23:59:56.058056 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:59:56.061847 systemd-logind[1540]: New session 7 of user core. May 12 23:59:56.071007 systemd[1]: Started session-7.scope - Session 7 of User core. May 12 23:59:56.127665 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 12 23:59:56.127838 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:59:56.142501 sudo[1826]: pam_unix(sudo:session): session closed for user root May 12 23:59:56.143369 sshd[1825]: Connection closed by 147.75.109.163 port 45488 May 12 23:59:56.144377 sshd-session[1822]: pam_unix(sshd:session): session closed for user core May 12 23:59:56.153459 systemd[1]: sshd@4-139.178.70.99:22-147.75.109.163:45488.service: Deactivated successfully. May 12 23:59:56.154458 systemd[1]: session-7.scope: Deactivated successfully. May 12 23:59:56.155475 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. May 12 23:59:56.156461 systemd[1]: Started sshd@5-139.178.70.99:22-147.75.109.163:45504.service - OpenSSH per-connection server daemon (147.75.109.163:45504). May 12 23:59:56.159105 systemd-logind[1540]: Removed session 7. 
May 12 23:59:56.193071 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 45504 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 12 23:59:56.193757 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:59:56.197406 systemd-logind[1540]: New session 8 of user core. May 12 23:59:56.203979 systemd[1]: Started session-8.scope - Session 8 of User core. May 12 23:59:56.254192 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 12 23:59:56.254391 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:59:56.256777 sudo[1836]: pam_unix(sudo:session): session closed for user root May 12 23:59:56.260676 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 12 23:59:56.261060 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:59:56.268186 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 23:59:56.299398 augenrules[1858]: No rules May 12 23:59:56.299733 systemd[1]: audit-rules.service: Deactivated successfully. May 12 23:59:56.299962 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 23:59:56.301125 sudo[1835]: pam_unix(sudo:session): session closed for user root May 12 23:59:56.302604 sshd[1834]: Connection closed by 147.75.109.163 port 45504 May 12 23:59:56.302969 sshd-session[1831]: pam_unix(sshd:session): session closed for user core May 12 23:59:56.307687 systemd[1]: sshd@5-139.178.70.99:22-147.75.109.163:45504.service: Deactivated successfully. May 12 23:59:56.308919 systemd[1]: session-8.scope: Deactivated successfully. May 12 23:59:56.309424 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. 
May 12 23:59:56.310826 systemd[1]: Started sshd@6-139.178.70.99:22-147.75.109.163:45508.service - OpenSSH per-connection server daemon (147.75.109.163:45508). May 12 23:59:56.312119 systemd-logind[1540]: Removed session 8. May 12 23:59:56.345329 sshd[1866]: Accepted publickey for core from 147.75.109.163 port 45508 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 12 23:59:56.346011 sshd-session[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:59:56.348447 systemd-logind[1540]: New session 9 of user core. May 12 23:59:56.358946 systemd[1]: Started session-9.scope - Session 9 of User core. May 12 23:59:56.408106 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 12 23:59:56.408327 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:59:56.713020 systemd[1]: Starting docker.service - Docker Application Container Engine... May 12 23:59:56.724042 (dockerd)[1888]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 12 23:59:56.975921 dockerd[1888]: time="2025-05-12T23:59:56.975675168Z" level=info msg="Starting up" May 12 23:59:56.978495 dockerd[1888]: time="2025-05-12T23:59:56.978476300Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 12 23:59:57.005426 dockerd[1888]: time="2025-05-12T23:59:57.005397689Z" level=info msg="Loading containers: start." May 12 23:59:57.098994 kernel: Initializing XFRM netlink socket May 12 23:59:57.100325 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. May 12 23:59:57.144142 systemd-networkd[1472]: docker0: Link UP May 12 23:59:57.172655 dockerd[1888]: time="2025-05-12T23:59:57.172553827Z" level=info msg="Loading containers: done." 
May 12 23:59:57.184064 dockerd[1888]: time="2025-05-12T23:59:57.184034613Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 12 23:59:57.184142 dockerd[1888]: time="2025-05-12T23:59:57.184105500Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 12 23:59:57.184187 dockerd[1888]: time="2025-05-12T23:59:57.184171818Z" level=info msg="Daemon has completed initialization"
May 12 23:59:57.186204 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4126042865-merged.mount: Deactivated successfully.
May 13 00:01:18.215068 systemd-resolved[1474]: Clock change detected. Flushing caches.
May 13 00:01:18.215138 systemd-timesyncd[1475]: Contacted time server 208.113.130.146:123 (2.flatcar.pool.ntp.org).
May 13 00:01:18.215176 systemd-timesyncd[1475]: Initial clock synchronization to Tue 2025-05-13 00:01:18.214986 UTC.
May 13 00:01:18.222335 dockerd[1888]: time="2025-05-13T00:01:18.222276250Z" level=info msg="API listen on /run/docker.sock"
May 13 00:01:18.222468 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 00:01:18.226166 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
May 13 00:01:18.238059 systemd[1]: logrotate.service: Deactivated successfully.
May 13 00:01:19.350668 containerd[1563]: time="2025-05-13T00:01:19.350383510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 13 00:01:19.686433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 13 00:01:19.687550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:01:19.977462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:01:19.979911 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:01:20.026679 kubelet[2096]: E0513 00:01:20.026609 2096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:01:20.027908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:01:20.028004 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:01:20.028401 systemd[1]: kubelet.service: Consumed 95ms CPU time, 104.1M memory peak.
May 13 00:01:20.242355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838033889.mount: Deactivated successfully.
May 13 00:01:21.182374 containerd[1563]: time="2025-05-13T00:01:21.182346785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:21.183169 containerd[1563]: time="2025-05-13T00:01:21.183140106Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879"
May 13 00:01:21.183532 containerd[1563]: time="2025-05-13T00:01:21.183510989Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:21.184817 containerd[1563]: time="2025-05-13T00:01:21.184794349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:21.185489 containerd[1563]: time="2025-05-13T00:01:21.185372214Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.834959638s"
May 13 00:01:21.185489 containerd[1563]: time="2025-05-13T00:01:21.185393409Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 13 00:01:21.185717 containerd[1563]: time="2025-05-13T00:01:21.185704067Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 13 00:01:22.489947 containerd[1563]: time="2025-05-13T00:01:22.489853167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:22.499917 containerd[1563]: time="2025-05-13T00:01:22.499868238Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589"
May 13 00:01:22.509576 containerd[1563]: time="2025-05-13T00:01:22.509542495Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:22.519445 containerd[1563]: time="2025-05-13T00:01:22.519396343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:22.520212 containerd[1563]: time="2025-05-13T00:01:22.519981473Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.334261242s"
May 13 00:01:22.520212 containerd[1563]: time="2025-05-13T00:01:22.520006637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 13 00:01:22.520745 containerd[1563]: time="2025-05-13T00:01:22.520320164Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 13 00:01:23.785219 containerd[1563]: time="2025-05-13T00:01:23.785114186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:23.790863 containerd[1563]: time="2025-05-13T00:01:23.790818934Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938"
May 13 00:01:23.795738 containerd[1563]: time="2025-05-13T00:01:23.795702241Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:23.803798 containerd[1563]: time="2025-05-13T00:01:23.803745032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:23.804458 containerd[1563]: time="2025-05-13T00:01:23.804354025Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.284012639s"
May 13 00:01:23.804458 containerd[1563]: time="2025-05-13T00:01:23.804378601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 13 00:01:23.804980 containerd[1563]: time="2025-05-13T00:01:23.804753675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 13 00:01:24.871836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708154963.mount: Deactivated successfully.
May 13 00:01:25.266703 containerd[1563]: time="2025-05-13T00:01:25.266660179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:25.272698 containerd[1563]: time="2025-05-13T00:01:25.272552604Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
May 13 00:01:25.280631 containerd[1563]: time="2025-05-13T00:01:25.280569740Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:25.286403 containerd[1563]: time="2025-05-13T00:01:25.286367517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:25.286837 containerd[1563]: time="2025-05-13T00:01:25.286682875Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.481902429s"
May 13 00:01:25.286837 containerd[1563]: time="2025-05-13T00:01:25.286709710Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 13 00:01:25.287090 containerd[1563]: time="2025-05-13T00:01:25.287052400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 13 00:01:26.027278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878047401.mount: Deactivated successfully.
May 13 00:01:26.873262 containerd[1563]: time="2025-05-13T00:01:26.873222005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:26.878402 containerd[1563]: time="2025-05-13T00:01:26.878351030Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 13 00:01:26.887434 containerd[1563]: time="2025-05-13T00:01:26.887380529Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:26.900155 containerd[1563]: time="2025-05-13T00:01:26.900127349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:26.900606 containerd[1563]: time="2025-05-13T00:01:26.900588244Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.613519187s"
May 13 00:01:26.900642 containerd[1563]: time="2025-05-13T00:01:26.900609452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 13 00:01:26.901011 containerd[1563]: time="2025-05-13T00:01:26.900997793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 00:01:27.603150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659130445.mount: Deactivated successfully.
May 13 00:01:27.605448 containerd[1563]: time="2025-05-13T00:01:27.605406260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:01:27.605871 containerd[1563]: time="2025-05-13T00:01:27.605841506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 13 00:01:27.605996 containerd[1563]: time="2025-05-13T00:01:27.605903750Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:01:27.607240 containerd[1563]: time="2025-05-13T00:01:27.607206671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:01:27.608214 containerd[1563]: time="2025-05-13T00:01:27.607677502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 706.664134ms"
May 13 00:01:27.608214 containerd[1563]: time="2025-05-13T00:01:27.607696051Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 13 00:01:27.608214 containerd[1563]: time="2025-05-13T00:01:27.608199934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 13 00:01:28.256358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342899115.mount: Deactivated successfully.
May 13 00:01:30.186548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 13 00:01:30.188391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:01:31.302831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:01:31.306165 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:01:31.464275 kubelet[2285]: E0513 00:01:31.464180 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:01:31.465190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:01:31.465271 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:01:31.465699 systemd[1]: kubelet.service: Consumed 108ms CPU time, 102.4M memory peak.
May 13 00:01:31.632720 update_engine[1541]: I20250513 00:01:31.632294 1541 update_attempter.cc:509] Updating boot flags...
May 13 00:01:31.680610 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2305)
May 13 00:01:32.957648 containerd[1563]: time="2025-05-13T00:01:32.957618178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:32.958806 containerd[1563]: time="2025-05-13T00:01:32.958764553Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 13 00:01:32.959964 containerd[1563]: time="2025-05-13T00:01:32.959272005Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:32.960580 containerd[1563]: time="2025-05-13T00:01:32.960566317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:32.961205 containerd[1563]: time="2025-05-13T00:01:32.961192089Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.352968466s"
May 13 00:01:32.961256 containerd[1563]: time="2025-05-13T00:01:32.961247888Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 13 00:01:34.962290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:01:34.962393 systemd[1]: kubelet.service: Consumed 108ms CPU time, 102.4M memory peak.
May 13 00:01:34.963837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:01:34.985520 systemd[1]: Reload requested from client PID 2340 ('systemctl') (unit session-9.scope)...
May 13 00:01:34.985531 systemd[1]: Reloading...
May 13 00:01:35.051943 zram_generator::config[2388]: No configuration found.
May 13 00:01:35.109208 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
May 13 00:01:35.130090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:01:35.215856 systemd[1]: Reloading finished in 230 ms.
May 13 00:01:35.262412 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 00:01:35.262493 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 00:01:35.262693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:01:35.264506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:01:35.506506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:01:35.510089 (kubelet)[2452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 00:01:35.592229 kubelet[2452]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:01:35.592229 kubelet[2452]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 00:01:35.592229 kubelet[2452]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:01:35.592518 kubelet[2452]: I0513 00:01:35.592278 2452 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:01:35.948757 kubelet[2452]: I0513 00:01:35.948462 2452 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 00:01:35.948757 kubelet[2452]: I0513 00:01:35.948488 2452 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:01:35.948757 kubelet[2452]: I0513 00:01:35.948670 2452 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 00:01:36.226185 kubelet[2452]: E0513 00:01:36.226109 2452 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError"
May 13 00:01:36.232650 kubelet[2452]: I0513 00:01:36.232619 2452 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:01:36.355493 kubelet[2452]: I0513 00:01:36.355461 2452 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 00:01:36.381670 kubelet[2452]: I0513 00:01:36.381625 2452 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:01:36.430816 kubelet[2452]: I0513 00:01:36.430767 2452 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:01:36.453738 kubelet[2452]: I0513 00:01:36.430812 2452 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 00:01:36.463511 kubelet[2452]: I0513 00:01:36.463478 2452 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:01:36.463511 kubelet[2452]: I0513 00:01:36.463511 2452 container_manager_linux.go:304] "Creating device plugin manager"
May 13 00:01:36.463676 kubelet[2452]: I0513 00:01:36.463655 2452 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:01:36.506007 kubelet[2452]: I0513 00:01:36.505898 2452 kubelet.go:446] "Attempting to sync node with API server"
May 13 00:01:36.506007 kubelet[2452]: I0513 00:01:36.505933 2452 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:01:36.522950 kubelet[2452]: I0513 00:01:36.522602 2452 kubelet.go:352] "Adding apiserver pod source"
May 13 00:01:36.522950 kubelet[2452]: I0513 00:01:36.522627 2452 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:01:36.529324 kubelet[2452]: W0513 00:01:36.529294 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
May 13 00:01:36.529378 kubelet[2452]: E0513 00:01:36.529339 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError"
May 13 00:01:36.536889 kubelet[2452]: W0513 00:01:36.536647 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
May 13 00:01:36.536889 kubelet[2452]: E0513 00:01:36.536686 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError"
May 13 00:01:36.541940 kubelet[2452]: I0513 00:01:36.541869 2452 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 00:01:36.563770 kubelet[2452]: I0513 00:01:36.563653 2452 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:01:36.567947 kubelet[2452]: W0513 00:01:36.567764 2452 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 00:01:36.568398 kubelet[2452]: I0513 00:01:36.568378 2452 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 00:01:36.568431 kubelet[2452]: I0513 00:01:36.568415 2452 server.go:1287] "Started kubelet"
May 13 00:01:36.608989 kubelet[2452]: I0513 00:01:36.608487 2452 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:01:36.614057 kubelet[2452]: E0513 00:01:36.601513 2452 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.99:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eed3028ffbe33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:01:36.568393267 +0000 UTC m=+1.055936666,LastTimestamp:2025-05-13 00:01:36.568393267 +0000 UTC m=+1.055936666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:01:36.616417 kubelet[2452]: I0513 00:01:36.616372 2452 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:01:36.617035 kubelet[2452]: I0513 00:01:36.617025 2452 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 00:01:36.617312 kubelet[2452]: E0513 00:01:36.617301 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:01:36.619496 kubelet[2452]: I0513 00:01:36.619486 2452 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:01:36.619592 kubelet[2452]: I0513 00:01:36.619584 2452 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:01:36.631506 kubelet[2452]: E0513 00:01:36.631489 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="200ms"
May 13 00:01:36.631879 kubelet[2452]: I0513 00:01:36.631765 2452 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:01:36.637462 kubelet[2452]: I0513 00:01:36.637423 2452 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:01:36.637940 kubelet[2452]: I0513 00:01:36.637638 2452 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:01:36.638045 kubelet[2452]: I0513 00:01:36.638035 2452 server.go:490] "Adding debug handlers to kubelet server"
May 13 00:01:36.648946 kubelet[2452]: W0513 00:01:36.648880 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
May 13 00:01:36.649127 kubelet[2452]: E0513 00:01:36.649110 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError"
May 13 00:01:36.649957 kubelet[2452]: I0513 00:01:36.649944 2452 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 00:01:36.659375 kubelet[2452]: I0513 00:01:36.659364 2452 factory.go:221] Registration of the containerd container factory successfully
May 13 00:01:36.659434 kubelet[2452]: I0513 00:01:36.659427 2452 factory.go:221] Registration of the systemd container factory successfully
May 13 00:01:36.684780 kubelet[2452]: I0513 00:01:36.684745 2452 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:01:36.685471 kubelet[2452]: I0513 00:01:36.684960 2452 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 00:01:36.685471 kubelet[2452]: I0513 00:01:36.684974 2452 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 00:01:36.685471 kubelet[2452]: I0513 00:01:36.684993 2452 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:01:36.685685 kubelet[2452]: I0513 00:01:36.685652 2452 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:01:36.685685 kubelet[2452]: I0513 00:01:36.685664 2452 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 00:01:36.685685 kubelet[2452]: I0513 00:01:36.685676 2452 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 00:01:36.685685 kubelet[2452]: I0513 00:01:36.685680 2452 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 00:01:36.685775 kubelet[2452]: E0513 00:01:36.685709 2452 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:01:36.687127 kubelet[2452]: W0513 00:01:36.686888 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
May 13 00:01:36.687127 kubelet[2452]: E0513 00:01:36.686937 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError"
May 13 00:01:36.703679 kubelet[2452]: I0513 00:01:36.703427 2452 policy_none.go:49] "None policy: Start"
May 13 00:01:36.703679 kubelet[2452]: I0513 00:01:36.703460 2452 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 00:01:36.703679 kubelet[2452]: I0513 00:01:36.703473 2452 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:01:36.713512 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 00:01:36.718028 kubelet[2452]: E0513 00:01:36.717997 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:01:36.722911 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 00:01:36.725893 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 00:01:36.740309 kubelet[2452]: I0513 00:01:36.740289 2452 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:01:36.740577 kubelet[2452]: I0513 00:01:36.740568 2452 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:01:36.740646 kubelet[2452]: I0513 00:01:36.740619 2452 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:01:36.740814 kubelet[2452]: I0513 00:01:36.740805 2452 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:01:36.742203 kubelet[2452]: E0513 00:01:36.742186 2452 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 00:01:36.742543 kubelet[2452]: E0513 00:01:36.742531 2452 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:01:36.793120 systemd[1]: Created slice kubepods-burstable-pod7f53bfa230b9dc34e7b6e7e519c5d3d4.slice - libcontainer container kubepods-burstable-pod7f53bfa230b9dc34e7b6e7e519c5d3d4.slice. May 13 00:01:36.809906 kubelet[2452]: E0513 00:01:36.809765 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:36.812056 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. 
May 13 00:01:36.820184 kubelet[2452]: E0513 00:01:36.820168 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:36.822454 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 13 00:01:36.823624 kubelet[2452]: E0513 00:01:36.823608 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:36.832154 kubelet[2452]: E0513 00:01:36.832117 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="400ms" May 13 00:01:36.842312 kubelet[2452]: I0513 00:01:36.842080 2452 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:01:36.842504 kubelet[2452]: E0513 00:01:36.842488 2452 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" May 13 00:01:36.920174 kubelet[2452]: I0513 00:01:36.920040 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:36.920174 kubelet[2452]: I0513 00:01:36.920067 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7f53bfa230b9dc34e7b6e7e519c5d3d4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7f53bfa230b9dc34e7b6e7e519c5d3d4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:01:36.920174 kubelet[2452]: I0513 00:01:36.920077 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:36.920174 kubelet[2452]: I0513 00:01:36.920085 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:36.920174 kubelet[2452]: I0513 00:01:36.920096 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:36.920330 kubelet[2452]: I0513 00:01:36.920104 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:01:36.920330 kubelet[2452]: I0513 00:01:36.920112 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7f53bfa230b9dc34e7b6e7e519c5d3d4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f53bfa230b9dc34e7b6e7e519c5d3d4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:01:36.920330 kubelet[2452]: I0513 00:01:36.920119 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f53bfa230b9dc34e7b6e7e519c5d3d4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f53bfa230b9dc34e7b6e7e519c5d3d4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:01:36.920330 kubelet[2452]: I0513 00:01:36.920128 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:37.044330 kubelet[2452]: I0513 00:01:37.044023 2452 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:01:37.044330 kubelet[2452]: E0513 00:01:37.044257 2452 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" May 13 00:01:37.111935 containerd[1563]: time="2025-05-13T00:01:37.111891506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7f53bfa230b9dc34e7b6e7e519c5d3d4,Namespace:kube-system,Attempt:0,}" May 13 00:01:37.122196 containerd[1563]: time="2025-05-13T00:01:37.121130397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 00:01:37.126955 containerd[1563]: time="2025-05-13T00:01:37.125646705Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 00:01:37.233294 kubelet[2452]: E0513 00:01:37.233262 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="800ms" May 13 00:01:37.259719 containerd[1563]: time="2025-05-13T00:01:37.259641601Z" level=info msg="connecting to shim c59879282a46e854f99c243527b87ef621ab33abc54e76bb5a6d7fa804b7ed2a" address="unix:///run/containerd/s/b9a3e59658ff9a79d5d44f50ed1eab9082c78b6649a88144d7c7c3e251067079" namespace=k8s.io protocol=ttrpc version=3 May 13 00:01:37.262348 containerd[1563]: time="2025-05-13T00:01:37.261980586Z" level=info msg="connecting to shim fa0ad5c7880987852d01a9d3da14b4da7343a6fbb12f251b7e2014345c74ed3b" address="unix:///run/containerd/s/c14061334b6823748281a954326204ca26acd45f4a314e5c6d1826f7ad66c2ba" namespace=k8s.io protocol=ttrpc version=3 May 13 00:01:37.264017 containerd[1563]: time="2025-05-13T00:01:37.263975266Z" level=info msg="connecting to shim 238c478ebf79d86e8832fab105fac06072dac4c944a9bced2130d28d986bc2a8" address="unix:///run/containerd/s/ad243d495da47d6774cbf5e7c41b1399423a675e5624f99b10b2be7997c272f5" namespace=k8s.io protocol=ttrpc version=3 May 13 00:01:37.368047 systemd[1]: Started cri-containerd-238c478ebf79d86e8832fab105fac06072dac4c944a9bced2130d28d986bc2a8.scope - libcontainer container 238c478ebf79d86e8832fab105fac06072dac4c944a9bced2130d28d986bc2a8. May 13 00:01:37.370407 systemd[1]: Started cri-containerd-c59879282a46e854f99c243527b87ef621ab33abc54e76bb5a6d7fa804b7ed2a.scope - libcontainer container c59879282a46e854f99c243527b87ef621ab33abc54e76bb5a6d7fa804b7ed2a. 
May 13 00:01:37.373137 systemd[1]: Started cri-containerd-fa0ad5c7880987852d01a9d3da14b4da7343a6fbb12f251b7e2014345c74ed3b.scope - libcontainer container fa0ad5c7880987852d01a9d3da14b4da7343a6fbb12f251b7e2014345c74ed3b. May 13 00:01:37.445054 kubelet[2452]: I0513 00:01:37.445018 2452 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:01:37.445244 kubelet[2452]: E0513 00:01:37.445229 2452 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" May 13 00:01:37.517716 containerd[1563]: time="2025-05-13T00:01:37.517654934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa0ad5c7880987852d01a9d3da14b4da7343a6fbb12f251b7e2014345c74ed3b\"" May 13 00:01:37.519656 containerd[1563]: time="2025-05-13T00:01:37.519264988Z" level=info msg="CreateContainer within sandbox \"fa0ad5c7880987852d01a9d3da14b4da7343a6fbb12f251b7e2014345c74ed3b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:01:37.551622 kubelet[2452]: W0513 00:01:37.551579 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 13 00:01:37.551709 kubelet[2452]: E0513 00:01:37.551628 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 13 00:01:37.556016 kubelet[2452]: 
W0513 00:01:37.555994 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 13 00:01:37.556055 kubelet[2452]: E0513 00:01:37.556020 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 13 00:01:37.561042 containerd[1563]: time="2025-05-13T00:01:37.561019033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7f53bfa230b9dc34e7b6e7e519c5d3d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"238c478ebf79d86e8832fab105fac06072dac4c944a9bced2130d28d986bc2a8\"" May 13 00:01:37.562505 containerd[1563]: time="2025-05-13T00:01:37.562489417Z" level=info msg="CreateContainer within sandbox \"238c478ebf79d86e8832fab105fac06072dac4c944a9bced2130d28d986bc2a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:01:37.590441 containerd[1563]: time="2025-05-13T00:01:37.590419739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"c59879282a46e854f99c243527b87ef621ab33abc54e76bb5a6d7fa804b7ed2a\"" May 13 00:01:37.591878 containerd[1563]: time="2025-05-13T00:01:37.591858513Z" level=info msg="CreateContainer within sandbox \"c59879282a46e854f99c243527b87ef621ab33abc54e76bb5a6d7fa804b7ed2a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:01:37.623898 kubelet[2452]: W0513 00:01:37.623795 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 13 00:01:37.623898 kubelet[2452]: E0513 00:01:37.623837 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 13 00:01:37.783554 kubelet[2452]: W0513 00:01:37.783494 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 13 00:01:37.783680 kubelet[2452]: E0513 00:01:37.783661 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 13 00:01:37.889455 containerd[1563]: time="2025-05-13T00:01:37.888735325Z" level=info msg="Container 40b04a70e934896d075d475c60839a4fadb6c6976721f7ca14d76b0cb48aa355: CDI devices from CRI Config.CDIDevices: []" May 13 00:01:37.948094 containerd[1563]: time="2025-05-13T00:01:37.948069490Z" level=info msg="Container 50cd563c71890f363b57c26991abb4b6fbef5d8cc1e609d2e74b40373058e6cb: CDI devices from CRI Config.CDIDevices: []" May 13 00:01:38.034336 kubelet[2452]: E0513 00:01:38.034305 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: 
connection refused" interval="1.6s" May 13 00:01:38.037741 containerd[1563]: time="2025-05-13T00:01:38.037672777Z" level=info msg="Container 0c44ae5563082ee05369898d6ec90657d905e4cd2a92c3ef1b5a918b4ea48f9a: CDI devices from CRI Config.CDIDevices: []" May 13 00:01:38.073225 containerd[1563]: time="2025-05-13T00:01:38.073129013Z" level=info msg="CreateContainer within sandbox \"fa0ad5c7880987852d01a9d3da14b4da7343a6fbb12f251b7e2014345c74ed3b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40b04a70e934896d075d475c60839a4fadb6c6976721f7ca14d76b0cb48aa355\"" May 13 00:01:38.073751 containerd[1563]: time="2025-05-13T00:01:38.073729638Z" level=info msg="StartContainer for \"40b04a70e934896d075d475c60839a4fadb6c6976721f7ca14d76b0cb48aa355\"" May 13 00:01:38.083867 containerd[1563]: time="2025-05-13T00:01:38.083826231Z" level=info msg="connecting to shim 40b04a70e934896d075d475c60839a4fadb6c6976721f7ca14d76b0cb48aa355" address="unix:///run/containerd/s/c14061334b6823748281a954326204ca26acd45f4a314e5c6d1826f7ad66c2ba" protocol=ttrpc version=3 May 13 00:01:38.101865 containerd[1563]: time="2025-05-13T00:01:38.101842513Z" level=info msg="CreateContainer within sandbox \"238c478ebf79d86e8832fab105fac06072dac4c944a9bced2130d28d986bc2a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50cd563c71890f363b57c26991abb4b6fbef5d8cc1e609d2e74b40373058e6cb\"" May 13 00:01:38.102344 containerd[1563]: time="2025-05-13T00:01:38.102242560Z" level=info msg="StartContainer for \"50cd563c71890f363b57c26991abb4b6fbef5d8cc1e609d2e74b40373058e6cb\"" May 13 00:01:38.102475 containerd[1563]: time="2025-05-13T00:01:38.102432159Z" level=info msg="CreateContainer within sandbox \"c59879282a46e854f99c243527b87ef621ab33abc54e76bb5a6d7fa804b7ed2a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c44ae5563082ee05369898d6ec90657d905e4cd2a92c3ef1b5a918b4ea48f9a\"" May 13 00:01:38.102757 containerd[1563]: 
time="2025-05-13T00:01:38.102677365Z" level=info msg="StartContainer for \"0c44ae5563082ee05369898d6ec90657d905e4cd2a92c3ef1b5a918b4ea48f9a\"" May 13 00:01:38.103056 systemd[1]: Started cri-containerd-40b04a70e934896d075d475c60839a4fadb6c6976721f7ca14d76b0cb48aa355.scope - libcontainer container 40b04a70e934896d075d475c60839a4fadb6c6976721f7ca14d76b0cb48aa355. May 13 00:01:38.103805 containerd[1563]: time="2025-05-13T00:01:38.103463233Z" level=info msg="connecting to shim 0c44ae5563082ee05369898d6ec90657d905e4cd2a92c3ef1b5a918b4ea48f9a" address="unix:///run/containerd/s/b9a3e59658ff9a79d5d44f50ed1eab9082c78b6649a88144d7c7c3e251067079" protocol=ttrpc version=3 May 13 00:01:38.105688 containerd[1563]: time="2025-05-13T00:01:38.105508984Z" level=info msg="connecting to shim 50cd563c71890f363b57c26991abb4b6fbef5d8cc1e609d2e74b40373058e6cb" address="unix:///run/containerd/s/ad243d495da47d6774cbf5e7c41b1399423a675e5624f99b10b2be7997c272f5" protocol=ttrpc version=3 May 13 00:01:38.119225 systemd[1]: Started cri-containerd-0c44ae5563082ee05369898d6ec90657d905e4cd2a92c3ef1b5a918b4ea48f9a.scope - libcontainer container 0c44ae5563082ee05369898d6ec90657d905e4cd2a92c3ef1b5a918b4ea48f9a. May 13 00:01:38.136196 systemd[1]: Started cri-containerd-50cd563c71890f363b57c26991abb4b6fbef5d8cc1e609d2e74b40373058e6cb.scope - libcontainer container 50cd563c71890f363b57c26991abb4b6fbef5d8cc1e609d2e74b40373058e6cb. 
May 13 00:01:38.180563 containerd[1563]: time="2025-05-13T00:01:38.180418103Z" level=info msg="StartContainer for \"40b04a70e934896d075d475c60839a4fadb6c6976721f7ca14d76b0cb48aa355\" returns successfully" May 13 00:01:38.213139 containerd[1563]: time="2025-05-13T00:01:38.212156237Z" level=info msg="StartContainer for \"0c44ae5563082ee05369898d6ec90657d905e4cd2a92c3ef1b5a918b4ea48f9a\" returns successfully" May 13 00:01:38.213369 containerd[1563]: time="2025-05-13T00:01:38.213349050Z" level=info msg="StartContainer for \"50cd563c71890f363b57c26991abb4b6fbef5d8cc1e609d2e74b40373058e6cb\" returns successfully" May 13 00:01:38.246629 kubelet[2452]: I0513 00:01:38.246615 2452 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:01:38.247002 kubelet[2452]: E0513 00:01:38.246988 2452 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" May 13 00:01:38.419418 kubelet[2452]: E0513 00:01:38.419386 2452 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 13 00:01:38.693359 kubelet[2452]: E0513 00:01:38.693206 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:38.694322 kubelet[2452]: E0513 00:01:38.694193 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:38.695270 kubelet[2452]: E0513 00:01:38.695256 2452 kubelet.go:3196] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:39.495201 kubelet[2452]: W0513 00:01:39.495154 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 13 00:01:39.495201 kubelet[2452]: E0513 00:01:39.495183 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 13 00:01:39.588410 kubelet[2452]: W0513 00:01:39.588362 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 13 00:01:39.588410 kubelet[2452]: E0513 00:01:39.588389 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 13 00:01:39.635328 kubelet[2452]: E0513 00:01:39.635294 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="3.2s" May 13 00:01:39.697489 kubelet[2452]: E0513 00:01:39.697325 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:39.698106 kubelet[2452]: E0513 00:01:39.697956 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:39.698106 kubelet[2452]: E0513 00:01:39.698044 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:39.848748 kubelet[2452]: I0513 00:01:39.848357 2452 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:01:40.699085 kubelet[2452]: E0513 00:01:40.698823 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:40.699085 kubelet[2452]: E0513 00:01:40.699005 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:40.995061 kubelet[2452]: I0513 00:01:40.994857 2452 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:01:40.995061 kubelet[2452]: E0513 00:01:40.994887 2452 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 00:01:40.997667 kubelet[2452]: E0513 00:01:40.997645 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.098642 kubelet[2452]: E0513 00:01:41.098605 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.198982 kubelet[2452]: E0513 00:01:41.198915 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.299898 kubelet[2452]: E0513 00:01:41.299724 
2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.400517 kubelet[2452]: E0513 00:01:41.400485 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.501334 kubelet[2452]: E0513 00:01:41.501302 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.601624 kubelet[2452]: E0513 00:01:41.601549 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.699774 kubelet[2452]: E0513 00:01:41.699686 2452 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:01:41.701809 kubelet[2452]: E0513 00:01:41.701794 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.802336 kubelet[2452]: E0513 00:01:41.802301 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:41.903178 kubelet[2452]: E0513 00:01:41.903101 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.003665 kubelet[2452]: E0513 00:01:42.003623 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.104643 kubelet[2452]: E0513 00:01:42.104610 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.205405 kubelet[2452]: E0513 00:01:42.205380 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.306104 kubelet[2452]: E0513 00:01:42.306073 2452 kubelet_node_status.go:467] "Error getting the 
current node from lister" err="node \"localhost\" not found" May 13 00:01:42.406907 kubelet[2452]: E0513 00:01:42.406876 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.507716 kubelet[2452]: E0513 00:01:42.507640 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.608770 kubelet[2452]: E0513 00:01:42.608730 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.709531 kubelet[2452]: E0513 00:01:42.709504 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.793346 systemd[1]: Reload requested from client PID 2723 ('systemctl') (unit session-9.scope)... May 13 00:01:42.793356 systemd[1]: Reloading... May 13 00:01:42.809586 kubelet[2452]: E0513 00:01:42.809562 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.859941 zram_generator::config[2767]: No configuration found. May 13 00:01:42.910132 kubelet[2452]: E0513 00:01:42.910107 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:42.930603 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:01:42.956970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 13 00:01:43.010718 kubelet[2452]: E0513 00:01:43.010685 2452 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:43.060799 systemd[1]: Reloading finished in 267 ms. May 13 00:01:43.078656 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:01:43.090614 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:01:43.090776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:01:43.090818 systemd[1]: kubelet.service: Consumed 573ms CPU time, 126.3M memory peak. May 13 00:01:43.092176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:01:43.458611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:01:43.467227 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:01:43.512945 kubelet[2835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:01:43.512945 kubelet[2835]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:01:43.512945 kubelet[2835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 00:01:43.513181 kubelet[2835]: I0513 00:01:43.512970 2835 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:01:43.518022 kubelet[2835]: I0513 00:01:43.517999 2835 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:01:43.518022 kubelet[2835]: I0513 00:01:43.518017 2835 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:01:43.518193 kubelet[2835]: I0513 00:01:43.518182 2835 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:01:43.521836 kubelet[2835]: I0513 00:01:43.520364 2835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:01:43.522798 kubelet[2835]: I0513 00:01:43.522787 2835 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:01:43.525239 kubelet[2835]: I0513 00:01:43.525231 2835 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 00:01:43.527546 kubelet[2835]: I0513 00:01:43.527530 2835 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:01:43.527668 kubelet[2835]: I0513 00:01:43.527647 2835 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:01:43.527776 kubelet[2835]: I0513 00:01:43.527671 2835 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:01:43.527845 kubelet[2835]: I0513 00:01:43.527777 2835 topology_manager.go:138] "Creating topology manager with none policy" 
May 13 00:01:43.527845 kubelet[2835]: I0513 00:01:43.527783 2835 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:01:43.527845 kubelet[2835]: I0513 00:01:43.527810 2835 state_mem.go:36] "Initialized new in-memory state store" May 13 00:01:43.528246 kubelet[2835]: I0513 00:01:43.527930 2835 kubelet.go:446] "Attempting to sync node with API server" May 13 00:01:43.528246 kubelet[2835]: I0513 00:01:43.527943 2835 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:01:43.528246 kubelet[2835]: I0513 00:01:43.527956 2835 kubelet.go:352] "Adding apiserver pod source" May 13 00:01:43.528246 kubelet[2835]: I0513 00:01:43.527962 2835 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:01:43.528481 kubelet[2835]: I0513 00:01:43.528473 2835 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 00:01:43.528946 kubelet[2835]: I0513 00:01:43.528739 2835 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:01:43.529069 kubelet[2835]: I0513 00:01:43.529062 2835 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:01:43.529125 kubelet[2835]: I0513 00:01:43.529119 2835 server.go:1287] "Started kubelet" May 13 00:01:43.531512 kubelet[2835]: I0513 00:01:43.531458 2835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:01:43.535015 kubelet[2835]: I0513 00:01:43.535005 2835 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:01:43.540943 kubelet[2835]: I0513 00:01:43.535340 2835 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:01:43.542110 kubelet[2835]: I0513 00:01:43.541831 2835 server.go:490] "Adding debug handlers to kubelet server" May 13 00:01:43.543888 kubelet[2835]: I0513 00:01:43.538436 2835 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:01:43.543888 
kubelet[2835]: E0513 00:01:43.538545 2835 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:01:43.543888 kubelet[2835]: I0513 00:01:43.539336 2835 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:01:43.543888 kubelet[2835]: I0513 00:01:43.543628 2835 reconciler.go:26] "Reconciler: start to sync state" May 13 00:01:43.543888 kubelet[2835]: I0513 00:01:43.535375 2835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:01:43.543888 kubelet[2835]: I0513 00:01:43.543728 2835 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:01:43.543888 kubelet[2835]: I0513 00:01:43.540863 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:01:43.544931 kubelet[2835]: I0513 00:01:43.544385 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:01:43.545511 kubelet[2835]: I0513 00:01:43.545502 2835 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:01:43.545580 kubelet[2835]: I0513 00:01:43.545519 2835 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 00:01:43.545580 kubelet[2835]: I0513 00:01:43.545523 2835 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:01:43.545580 kubelet[2835]: E0513 00:01:43.545549 2835 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:01:43.555617 kubelet[2835]: I0513 00:01:43.555602 2835 factory.go:221] Registration of the systemd container factory successfully May 13 00:01:43.555755 kubelet[2835]: I0513 00:01:43.555745 2835 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:01:43.563309 kubelet[2835]: I0513 00:01:43.561412 2835 factory.go:221] Registration of the containerd container factory successfully May 13 00:01:43.565099 kubelet[2835]: E0513 00:01:43.565074 2835 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:01:43.596334 kubelet[2835]: I0513 00:01:43.596315 2835 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:01:43.596334 kubelet[2835]: I0513 00:01:43.596326 2835 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:01:43.596334 kubelet[2835]: I0513 00:01:43.596336 2835 state_mem.go:36] "Initialized new in-memory state store" May 13 00:01:43.596455 kubelet[2835]: I0513 00:01:43.596432 2835 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:01:43.596455 kubelet[2835]: I0513 00:01:43.596439 2835 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:01:43.596455 kubelet[2835]: I0513 00:01:43.596451 2835 policy_none.go:49] "None policy: Start" May 13 00:01:43.596455 kubelet[2835]: I0513 00:01:43.596455 2835 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:01:43.596517 kubelet[2835]: I0513 00:01:43.596462 2835 state_mem.go:35] "Initializing new in-memory state store" May 13 00:01:43.596552 kubelet[2835]: I0513 00:01:43.596540 2835 state_mem.go:75] "Updated machine memory state" May 13 00:01:43.599295 kubelet[2835]: I0513 00:01:43.598932 2835 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:01:43.599295 kubelet[2835]: I0513 00:01:43.599027 2835 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:01:43.599295 kubelet[2835]: I0513 00:01:43.599033 2835 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:01:43.599295 kubelet[2835]: I0513 00:01:43.599179 2835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:01:43.600580 kubelet[2835]: E0513 00:01:43.600569 2835 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 00:01:43.647505 kubelet[2835]: I0513 00:01:43.647483 2835 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:01:43.647895 kubelet[2835]: I0513 00:01:43.647483 2835 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:01:43.648014 kubelet[2835]: I0513 00:01:43.648006 2835 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:01:43.701116 kubelet[2835]: I0513 00:01:43.701096 2835 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:01:43.705716 kubelet[2835]: I0513 00:01:43.705239 2835 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 00:01:43.705716 kubelet[2835]: I0513 00:01:43.705287 2835 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:01:43.744803 kubelet[2835]: I0513 00:01:43.744726 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f53bfa230b9dc34e7b6e7e519c5d3d4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7f53bfa230b9dc34e7b6e7e519c5d3d4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:01:43.744803 kubelet[2835]: I0513 00:01:43.744756 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:43.744803 kubelet[2835]: I0513 00:01:43.744769 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:01:43.744803 kubelet[2835]: I0513 00:01:43.744780 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f53bfa230b9dc34e7b6e7e519c5d3d4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f53bfa230b9dc34e7b6e7e519c5d3d4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:01:43.744803 kubelet[2835]: I0513 00:01:43.744789 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:43.745087 kubelet[2835]: I0513 00:01:43.744797 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f53bfa230b9dc34e7b6e7e519c5d3d4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f53bfa230b9dc34e7b6e7e519c5d3d4\") " pod="kube-system/kube-apiserver-localhost" May 13 00:01:43.745087 kubelet[2835]: I0513 00:01:43.744806 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:43.745087 kubelet[2835]: I0513 00:01:43.744815 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:43.745087 kubelet[2835]: I0513 00:01:43.744824 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:01:43.796023 sudo[2868]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 00:01:43.796205 sudo[2868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 00:01:44.225870 sudo[2868]: pam_unix(sudo:session): session closed for user root May 13 00:01:44.533029 kubelet[2835]: I0513 00:01:44.532967 2835 apiserver.go:52] "Watching apiserver" May 13 00:01:44.543769 kubelet[2835]: I0513 00:01:44.543748 2835 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:01:44.584680 kubelet[2835]: I0513 00:01:44.584663 2835 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:01:44.591986 kubelet[2835]: E0513 00:01:44.591964 2835 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:01:44.604554 kubelet[2835]: I0513 00:01:44.604448 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6044370319999999 podStartE2EDuration="1.604437032s" podCreationTimestamp="2025-05-13 00:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-13 00:01:44.600485218 +0000 UTC m=+1.116368314" watchObservedRunningTime="2025-05-13 00:01:44.604437032 +0000 UTC m=+1.120320125" May 13 00:01:44.609296 kubelet[2835]: I0513 00:01:44.609203 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.609193246 podStartE2EDuration="1.609193246s" podCreationTimestamp="2025-05-13 00:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:01:44.609036652 +0000 UTC m=+1.124919750" watchObservedRunningTime="2025-05-13 00:01:44.609193246 +0000 UTC m=+1.125076340" May 13 00:01:44.609682 kubelet[2835]: I0513 00:01:44.609655 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.60928502 podStartE2EDuration="1.60928502s" podCreationTimestamp="2025-05-13 00:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:01:44.604912219 +0000 UTC m=+1.120795317" watchObservedRunningTime="2025-05-13 00:01:44.60928502 +0000 UTC m=+1.125168117" May 13 00:01:45.529732 sudo[1870]: pam_unix(sudo:session): session closed for user root May 13 00:01:45.530662 sshd[1869]: Connection closed by 147.75.109.163 port 45508 May 13 00:01:45.534521 sshd-session[1866]: pam_unix(sshd:session): session closed for user core May 13 00:01:45.536961 systemd[1]: sshd@6-139.178.70.99:22-147.75.109.163:45508.service: Deactivated successfully. May 13 00:01:45.538164 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:01:45.538282 systemd[1]: session-9.scope: Consumed 3.108s CPU time, 210.2M memory peak. May 13 00:01:45.539049 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. May 13 00:01:45.539873 systemd-logind[1540]: Removed session 9. 
May 13 00:01:48.606822 kubelet[2835]: I0513 00:01:48.606740 2835 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:01:48.607273 kubelet[2835]: I0513 00:01:48.607140 2835 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:01:48.607312 containerd[1563]: time="2025-05-13T00:01:48.607003931Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:01:49.255998 systemd[1]: Created slice kubepods-besteffort-poddba8cb2a_1f67_4c33_bae0_4b9059ade0b6.slice - libcontainer container kubepods-besteffort-poddba8cb2a_1f67_4c33_bae0_4b9059ade0b6.slice. May 13 00:01:49.266457 systemd[1]: Created slice kubepods-burstable-pod95574585_119d_4c26_add6_806627db6d54.slice - libcontainer container kubepods-burstable-pod95574585_119d_4c26_add6_806627db6d54.slice. May 13 00:01:49.279366 kubelet[2835]: I0513 00:01:49.279345 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-bpf-maps\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.279493 kubelet[2835]: I0513 00:01:49.279485 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/95574585-119d-4c26-add6-806627db6d54-hubble-tls\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.279553 kubelet[2835]: I0513 00:01:49.279546 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dba8cb2a-1f67-4c33-bae0-4b9059ade0b6-kube-proxy\") pod \"kube-proxy-vqhh8\" (UID: \"dba8cb2a-1f67-4c33-bae0-4b9059ade0b6\") " 
pod="kube-system/kube-proxy-vqhh8" May 13 00:01:49.279607 kubelet[2835]: I0513 00:01:49.279601 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cilium-run\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.279666 kubelet[2835]: I0513 00:01:49.279660 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cilium-cgroup\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.279716 kubelet[2835]: I0513 00:01:49.279703 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jfm\" (UniqueName: \"kubernetes.io/projected/dba8cb2a-1f67-4c33-bae0-4b9059ade0b6-kube-api-access-t6jfm\") pod \"kube-proxy-vqhh8\" (UID: \"dba8cb2a-1f67-4c33-bae0-4b9059ade0b6\") " pod="kube-system/kube-proxy-vqhh8" May 13 00:01:49.279768 kubelet[2835]: I0513 00:01:49.279762 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/95574585-119d-4c26-add6-806627db6d54-clustermesh-secrets\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.279807 kubelet[2835]: I0513 00:01:49.279802 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95574585-119d-4c26-add6-806627db6d54-cilium-config-path\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.279877 kubelet[2835]: I0513 00:01:49.279870 
2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-hostproc\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.279917 kubelet[2835]: I0513 00:01:49.279910 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-etc-cni-netd\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.279999 kubelet[2835]: I0513 00:01:49.279993 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-xtables-lock\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.280052 kubelet[2835]: I0513 00:01:49.280035 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-lib-modules\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.280125 kubelet[2835]: I0513 00:01:49.280089 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-host-proc-sys-kernel\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.280249 kubelet[2835]: I0513 00:01:49.280163 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhkm7\" (UniqueName: 
\"kubernetes.io/projected/95574585-119d-4c26-add6-806627db6d54-kube-api-access-zhkm7\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.280249 kubelet[2835]: I0513 00:01:49.280175 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dba8cb2a-1f67-4c33-bae0-4b9059ade0b6-xtables-lock\") pod \"kube-proxy-vqhh8\" (UID: \"dba8cb2a-1f67-4c33-bae0-4b9059ade0b6\") " pod="kube-system/kube-proxy-vqhh8" May 13 00:01:49.280249 kubelet[2835]: I0513 00:01:49.280196 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dba8cb2a-1f67-4c33-bae0-4b9059ade0b6-lib-modules\") pod \"kube-proxy-vqhh8\" (UID: \"dba8cb2a-1f67-4c33-bae0-4b9059ade0b6\") " pod="kube-system/kube-proxy-vqhh8" May 13 00:01:49.280249 kubelet[2835]: I0513 00:01:49.280218 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cni-path\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.280249 kubelet[2835]: I0513 00:01:49.280227 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-host-proc-sys-net\") pod \"cilium-hz88k\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") " pod="kube-system/cilium-hz88k" May 13 00:01:49.565256 containerd[1563]: time="2025-05-13T00:01:49.565018667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqhh8,Uid:dba8cb2a-1f67-4c33-bae0-4b9059ade0b6,Namespace:kube-system,Attempt:0,}" May 13 00:01:49.570757 containerd[1563]: time="2025-05-13T00:01:49.570514794Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hz88k,Uid:95574585-119d-4c26-add6-806627db6d54,Namespace:kube-system,Attempt:0,}" May 13 00:01:49.578996 containerd[1563]: time="2025-05-13T00:01:49.578907902Z" level=info msg="connecting to shim 08b911583e0168167bd6452ce09c375110a1d508ea58c799aa5f6e55c6a38e71" address="unix:///run/containerd/s/f34a1517187f4081b591d3a06fb96659109e9409a6ff3b1f7513b6e8e0e3ca25" namespace=k8s.io protocol=ttrpc version=3 May 13 00:01:49.596089 systemd[1]: Started cri-containerd-08b911583e0168167bd6452ce09c375110a1d508ea58c799aa5f6e55c6a38e71.scope - libcontainer container 08b911583e0168167bd6452ce09c375110a1d508ea58c799aa5f6e55c6a38e71. May 13 00:01:49.618560 containerd[1563]: time="2025-05-13T00:01:49.618458830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqhh8,Uid:dba8cb2a-1f67-4c33-bae0-4b9059ade0b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"08b911583e0168167bd6452ce09c375110a1d508ea58c799aa5f6e55c6a38e71\"" May 13 00:01:49.621388 containerd[1563]: time="2025-05-13T00:01:49.621368469Z" level=info msg="connecting to shim d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e" address="unix:///run/containerd/s/50c4175efa5049a9864c4eb49d702893da77fc73680fa6fe902b3b865d0da9fc" namespace=k8s.io protocol=ttrpc version=3 May 13 00:01:49.621844 containerd[1563]: time="2025-05-13T00:01:49.621615203Z" level=info msg="CreateContainer within sandbox \"08b911583e0168167bd6452ce09c375110a1d508ea58c799aa5f6e55c6a38e71\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:01:49.631622 containerd[1563]: time="2025-05-13T00:01:49.630913826Z" level=info msg="Container 806e3d249edf88eea212d3d1582be5960f625a12ad03a82926782d68bf61a58b: CDI devices from CRI Config.CDIDevices: []" May 13 00:01:49.635323 containerd[1563]: time="2025-05-13T00:01:49.635299707Z" level=info msg="CreateContainer within sandbox \"08b911583e0168167bd6452ce09c375110a1d508ea58c799aa5f6e55c6a38e71\" 
for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"806e3d249edf88eea212d3d1582be5960f625a12ad03a82926782d68bf61a58b\"" May 13 00:01:49.636182 containerd[1563]: time="2025-05-13T00:01:49.636165236Z" level=info msg="StartContainer for \"806e3d249edf88eea212d3d1582be5960f625a12ad03a82926782d68bf61a58b\"" May 13 00:01:49.638403 containerd[1563]: time="2025-05-13T00:01:49.638378251Z" level=info msg="connecting to shim 806e3d249edf88eea212d3d1582be5960f625a12ad03a82926782d68bf61a58b" address="unix:///run/containerd/s/f34a1517187f4081b591d3a06fb96659109e9409a6ff3b1f7513b6e8e0e3ca25" protocol=ttrpc version=3 May 13 00:01:49.645201 systemd[1]: Started cri-containerd-d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e.scope - libcontainer container d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e. May 13 00:01:49.660468 systemd[1]: Started cri-containerd-806e3d249edf88eea212d3d1582be5960f625a12ad03a82926782d68bf61a58b.scope - libcontainer container 806e3d249edf88eea212d3d1582be5960f625a12ad03a82926782d68bf61a58b. May 13 00:01:49.666710 systemd[1]: Created slice kubepods-besteffort-podc28b31b5_51a1_415e_a1b3_96b7cab69362.slice - libcontainer container kubepods-besteffort-podc28b31b5_51a1_415e_a1b3_96b7cab69362.slice. 
May 13 00:01:49.683479 kubelet[2835]: I0513 00:01:49.683448 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65x9r\" (UniqueName: \"kubernetes.io/projected/c28b31b5-51a1-415e-a1b3-96b7cab69362-kube-api-access-65x9r\") pod \"cilium-operator-6c4d7847fc-ms2vd\" (UID: \"c28b31b5-51a1-415e-a1b3-96b7cab69362\") " pod="kube-system/cilium-operator-6c4d7847fc-ms2vd" May 13 00:01:49.683479 kubelet[2835]: I0513 00:01:49.683475 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c28b31b5-51a1-415e-a1b3-96b7cab69362-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ms2vd\" (UID: \"c28b31b5-51a1-415e-a1b3-96b7cab69362\") " pod="kube-system/cilium-operator-6c4d7847fc-ms2vd" May 13 00:01:49.701815 containerd[1563]: time="2025-05-13T00:01:49.701749574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hz88k,Uid:95574585-119d-4c26-add6-806627db6d54,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\"" May 13 00:01:49.704625 containerd[1563]: time="2025-05-13T00:01:49.704492833Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:01:49.740200 containerd[1563]: time="2025-05-13T00:01:49.740178675Z" level=info msg="StartContainer for \"806e3d249edf88eea212d3d1582be5960f625a12ad03a82926782d68bf61a58b\" returns successfully" May 13 00:01:49.971154 containerd[1563]: time="2025-05-13T00:01:49.971065616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ms2vd,Uid:c28b31b5-51a1-415e-a1b3-96b7cab69362,Namespace:kube-system,Attempt:0,}" May 13 00:01:50.014388 containerd[1563]: time="2025-05-13T00:01:50.013869775Z" level=info msg="connecting to shim 
570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b" address="unix:///run/containerd/s/2b443bacb3535b2db470a5a736b957b771777490bb06f1cd899cad2ac695aa85" namespace=k8s.io protocol=ttrpc version=3 May 13 00:01:50.033396 systemd[1]: Started cri-containerd-570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b.scope - libcontainer container 570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b. May 13 00:01:50.080137 containerd[1563]: time="2025-05-13T00:01:50.080108844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ms2vd,Uid:c28b31b5-51a1-415e-a1b3-96b7cab69362,Namespace:kube-system,Attempt:0,} returns sandbox id \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\"" May 13 00:01:50.614048 kubelet[2835]: I0513 00:01:50.613910 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vqhh8" podStartSLOduration=1.613896153 podStartE2EDuration="1.613896153s" podCreationTimestamp="2025-05-13 00:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:01:50.613662276 +0000 UTC m=+7.129545377" watchObservedRunningTime="2025-05-13 00:01:50.613896153 +0000 UTC m=+7.129779246" May 13 00:01:54.890963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532038896.mount: Deactivated successfully. 
May 13 00:01:57.212801 containerd[1563]: time="2025-05-13T00:01:57.212765452Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:01:57.218940 containerd[1563]: time="2025-05-13T00:01:57.218887450Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 00:01:57.226383 containerd[1563]: time="2025-05-13T00:01:57.225337304Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:01:57.226383 containerd[1563]: time="2025-05-13T00:01:57.226143548Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.521608528s" May 13 00:01:57.226383 containerd[1563]: time="2025-05-13T00:01:57.226164192Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 00:01:57.226934 containerd[1563]: time="2025-05-13T00:01:57.226912235Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:01:57.236067 containerd[1563]: time="2025-05-13T00:01:57.236044140Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:01:57.321231 containerd[1563]: time="2025-05-13T00:01:57.321200715Z" level=info msg="Container c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0: CDI devices from CRI Config.CDIDevices: []" May 13 00:01:57.332621 containerd[1563]: time="2025-05-13T00:01:57.332582788Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\"" May 13 00:01:57.333912 containerd[1563]: time="2025-05-13T00:01:57.333278752Z" level=info msg="StartContainer for \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\"" May 13 00:01:57.334524 containerd[1563]: time="2025-05-13T00:01:57.334506653Z" level=info msg="connecting to shim c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0" address="unix:///run/containerd/s/50c4175efa5049a9864c4eb49d702893da77fc73680fa6fe902b3b865d0da9fc" protocol=ttrpc version=3 May 13 00:01:57.370600 systemd[1]: Started cri-containerd-c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0.scope - libcontainer container c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0. May 13 00:01:57.458711 systemd[1]: cri-containerd-c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0.scope: Deactivated successfully. 
May 13 00:01:57.491151 containerd[1563]: time="2025-05-13T00:01:57.490933080Z" level=info msg="StartContainer for \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\" returns successfully"
May 13 00:01:57.555948 containerd[1563]: time="2025-05-13T00:01:57.555532749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\" id:\"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\" pid:3246 exited_at:{seconds:1747094517 nanos:459608237}"
May 13 00:01:57.558378 containerd[1563]: time="2025-05-13T00:01:57.558358381Z" level=info msg="received exit event container_id:\"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\" id:\"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\" pid:3246 exited_at:{seconds:1747094517 nanos:459608237}"
May 13 00:01:57.598735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0-rootfs.mount: Deactivated successfully.
May 13 00:01:58.671645 containerd[1563]: time="2025-05-13T00:01:58.671617287Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:01:58.686246 containerd[1563]: time="2025-05-13T00:01:58.686121528Z" level=info msg="Container 5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649: CDI devices from CRI Config.CDIDevices: []"
May 13 00:01:58.692478 containerd[1563]: time="2025-05-13T00:01:58.692452100Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\""
May 13 00:01:58.693931 containerd[1563]: time="2025-05-13T00:01:58.692864074Z" level=info msg="StartContainer for \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\""
May 13 00:01:58.693931 containerd[1563]: time="2025-05-13T00:01:58.693358223Z" level=info msg="connecting to shim 5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649" address="unix:///run/containerd/s/50c4175efa5049a9864c4eb49d702893da77fc73680fa6fe902b3b865d0da9fc" protocol=ttrpc version=3
May 13 00:01:58.719236 systemd[1]: Started cri-containerd-5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649.scope - libcontainer container 5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649.
May 13 00:01:58.756346 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:01:58.756846 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:01:58.756977 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 13 00:01:58.761899 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:01:58.763445 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 00:01:58.766955 systemd[1]: cri-containerd-5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649.scope: Deactivated successfully.
May 13 00:01:58.771454 containerd[1563]: time="2025-05-13T00:01:58.771095164Z" level=info msg="StartContainer for \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\" returns successfully"
May 13 00:01:58.771454 containerd[1563]: time="2025-05-13T00:01:58.771267488Z" level=info msg="received exit event container_id:\"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\" id:\"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\" pid:3291 exited_at:{seconds:1747094518 nanos:767537166}"
May 13 00:01:58.772649 containerd[1563]: time="2025-05-13T00:01:58.771598950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\" id:\"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\" pid:3291 exited_at:{seconds:1747094518 nanos:767537166}"
May 13 00:01:58.826971 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:01:59.188779 containerd[1563]: time="2025-05-13T00:01:59.188393231Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:59.189087 containerd[1563]: time="2025-05-13T00:01:59.189064045Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 13 00:01:59.192083 containerd[1563]: time="2025-05-13T00:01:59.192071578Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:01:59.193032 containerd[1563]: time="2025-05-13T00:01:59.192865974Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.965850649s"
May 13 00:01:59.193032 containerd[1563]: time="2025-05-13T00:01:59.192883157Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 13 00:01:59.195051 containerd[1563]: time="2025-05-13T00:01:59.195025179Z" level=info msg="CreateContainer within sandbox \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 00:01:59.200336 containerd[1563]: time="2025-05-13T00:01:59.200309843Z" level=info msg="Container 0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5: CDI devices from CRI Config.CDIDevices: []"
May 13 00:01:59.231018 containerd[1563]: time="2025-05-13T00:01:59.230948719Z" level=info msg="CreateContainer within sandbox \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\""
May 13 00:01:59.232145 containerd[1563]: time="2025-05-13T00:01:59.231440588Z" level=info msg="StartContainer for \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\""
May 13 00:01:59.232145 containerd[1563]: time="2025-05-13T00:01:59.231938375Z" level=info msg="connecting to shim 0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5" address="unix:///run/containerd/s/2b443bacb3535b2db470a5a736b957b771777490bb06f1cd899cad2ac695aa85" protocol=ttrpc version=3
May 13 00:01:59.246037 systemd[1]: Started cri-containerd-0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5.scope - libcontainer container 0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5.
May 13 00:01:59.266319 containerd[1563]: time="2025-05-13T00:01:59.266262303Z" level=info msg="StartContainer for \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" returns successfully"
May 13 00:01:59.674407 containerd[1563]: time="2025-05-13T00:01:59.674342136Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:01:59.682837 containerd[1563]: time="2025-05-13T00:01:59.682507513Z" level=info msg="Container bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746: CDI devices from CRI Config.CDIDevices: []"
May 13 00:01:59.688190 containerd[1563]: time="2025-05-13T00:01:59.688121205Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\""
May 13 00:01:59.688518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649-rootfs.mount: Deactivated successfully.
May 13 00:01:59.689916 containerd[1563]: time="2025-05-13T00:01:59.688530396Z" level=info msg="StartContainer for \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\""
May 13 00:01:59.694629 containerd[1563]: time="2025-05-13T00:01:59.694525321Z" level=info msg="connecting to shim bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746" address="unix:///run/containerd/s/50c4175efa5049a9864c4eb49d702893da77fc73680fa6fe902b3b865d0da9fc" protocol=ttrpc version=3
May 13 00:01:59.716068 systemd[1]: Started cri-containerd-bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746.scope - libcontainer container bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746.
May 13 00:01:59.764090 containerd[1563]: time="2025-05-13T00:01:59.764015767Z" level=info msg="StartContainer for \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\" returns successfully"
May 13 00:01:59.771582 systemd[1]: cri-containerd-bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746.scope: Deactivated successfully.
May 13 00:01:59.772030 containerd[1563]: time="2025-05-13T00:01:59.771899392Z" level=info msg="received exit event container_id:\"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\" id:\"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\" pid:3383 exited_at:{seconds:1747094519 nanos:771789562}"
May 13 00:01:59.772190 containerd[1563]: time="2025-05-13T00:01:59.772178832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\" id:\"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\" pid:3383 exited_at:{seconds:1747094519 nanos:771789562}"
May 13 00:01:59.796424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746-rootfs.mount: Deactivated successfully.
May 13 00:02:00.679882 containerd[1563]: time="2025-05-13T00:02:00.679831929Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:02:00.686001 containerd[1563]: time="2025-05-13T00:02:00.685643501Z" level=info msg="Container 4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d: CDI devices from CRI Config.CDIDevices: []"
May 13 00:02:00.689663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790367884.mount: Deactivated successfully.
May 13 00:02:00.690431 containerd[1563]: time="2025-05-13T00:02:00.690415565Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\""
May 13 00:02:00.691874 containerd[1563]: time="2025-05-13T00:02:00.691358827Z" level=info msg="StartContainer for \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\""
May 13 00:02:00.691874 containerd[1563]: time="2025-05-13T00:02:00.691778850Z" level=info msg="connecting to shim 4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d" address="unix:///run/containerd/s/50c4175efa5049a9864c4eb49d702893da77fc73680fa6fe902b3b865d0da9fc" protocol=ttrpc version=3
May 13 00:02:00.695259 kubelet[2835]: I0513 00:02:00.694266 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ms2vd" podStartSLOduration=2.5817529280000002 podStartE2EDuration="11.694248485s" podCreationTimestamp="2025-05-13 00:01:49 +0000 UTC" firstStartedPulling="2025-05-13 00:01:50.080821399 +0000 UTC m=+6.596704493" lastFinishedPulling="2025-05-13 00:01:59.193316957 +0000 UTC m=+15.709200050" observedRunningTime="2025-05-13 00:01:59.703143982 +0000 UTC m=+16.219027083" watchObservedRunningTime="2025-05-13 00:02:00.694248485 +0000 UTC m=+17.210131587"
May 13 00:02:00.713035 systemd[1]: Started cri-containerd-4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d.scope - libcontainer container 4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d.
May 13 00:02:00.739766 systemd[1]: cri-containerd-4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d.scope: Deactivated successfully.
May 13 00:02:00.740221 containerd[1563]: time="2025-05-13T00:02:00.739984544Z" level=info msg="received exit event container_id:\"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\" id:\"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\" pid:3422 exited_at:{seconds:1747094520 nanos:739817301}"
May 13 00:02:00.740221 containerd[1563]: time="2025-05-13T00:02:00.740114995Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\" id:\"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\" pid:3422 exited_at:{seconds:1747094520 nanos:739817301}"
May 13 00:02:00.745470 containerd[1563]: time="2025-05-13T00:02:00.745196129Z" level=info msg="StartContainer for \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\" returns successfully"
May 13 00:02:00.752602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d-rootfs.mount: Deactivated successfully.
May 13 00:02:01.683697 containerd[1563]: time="2025-05-13T00:02:01.683666107Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:02:01.691896 containerd[1563]: time="2025-05-13T00:02:01.691773408Z" level=info msg="Container bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282: CDI devices from CRI Config.CDIDevices: []"
May 13 00:02:01.693903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201417443.mount: Deactivated successfully.
May 13 00:02:01.697305 containerd[1563]: time="2025-05-13T00:02:01.697287932Z" level=info msg="CreateContainer within sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\""
May 13 00:02:01.698272 containerd[1563]: time="2025-05-13T00:02:01.698087996Z" level=info msg="StartContainer for \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\""
May 13 00:02:01.698757 containerd[1563]: time="2025-05-13T00:02:01.698744882Z" level=info msg="connecting to shim bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282" address="unix:///run/containerd/s/50c4175efa5049a9864c4eb49d702893da77fc73680fa6fe902b3b865d0da9fc" protocol=ttrpc version=3
May 13 00:02:01.720042 systemd[1]: Started cri-containerd-bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282.scope - libcontainer container bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282.
May 13 00:02:01.739799 containerd[1563]: time="2025-05-13T00:02:01.739779413Z" level=info msg="StartContainer for \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" returns successfully"
May 13 00:02:01.826304 containerd[1563]: time="2025-05-13T00:02:01.826135744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" id:\"b7c48acb986210201be6d3d8a19f64acf34abde0bb90182653120538990efee6\" pid:3491 exited_at:{seconds:1747094521 nanos:825085067}"
May 13 00:02:01.846850 kubelet[2835]: I0513 00:02:01.846826 2835 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 13 00:02:01.917316 systemd[1]: Created slice kubepods-burstable-pod5e20f0bd_a4a6_4231_95cd_dfc2154ad934.slice - libcontainer container kubepods-burstable-pod5e20f0bd_a4a6_4231_95cd_dfc2154ad934.slice.
May 13 00:02:01.923255 systemd[1]: Created slice kubepods-burstable-pod8b912761_2944_4d84_a039_66acc79094ec.slice - libcontainer container kubepods-burstable-pod8b912761_2944_4d84_a039_66acc79094ec.slice.
May 13 00:02:01.971248 kubelet[2835]: I0513 00:02:01.971223 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e20f0bd-a4a6-4231-95cd-dfc2154ad934-config-volume\") pod \"coredns-668d6bf9bc-4kkn2\" (UID: \"5e20f0bd-a4a6-4231-95cd-dfc2154ad934\") " pod="kube-system/coredns-668d6bf9bc-4kkn2"
May 13 00:02:01.971248 kubelet[2835]: I0513 00:02:01.971250 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9rn\" (UniqueName: \"kubernetes.io/projected/5e20f0bd-a4a6-4231-95cd-dfc2154ad934-kube-api-access-rk9rn\") pod \"coredns-668d6bf9bc-4kkn2\" (UID: \"5e20f0bd-a4a6-4231-95cd-dfc2154ad934\") " pod="kube-system/coredns-668d6bf9bc-4kkn2"
May 13 00:02:01.971371 kubelet[2835]: I0513 00:02:01.971267 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b912761-2944-4d84-a039-66acc79094ec-config-volume\") pod \"coredns-668d6bf9bc-jk44k\" (UID: \"8b912761-2944-4d84-a039-66acc79094ec\") " pod="kube-system/coredns-668d6bf9bc-jk44k"
May 13 00:02:01.971371 kubelet[2835]: I0513 00:02:01.971282 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvwz\" (UniqueName: \"kubernetes.io/projected/8b912761-2944-4d84-a039-66acc79094ec-kube-api-access-bnvwz\") pod \"coredns-668d6bf9bc-jk44k\" (UID: \"8b912761-2944-4d84-a039-66acc79094ec\") " pod="kube-system/coredns-668d6bf9bc-jk44k"
May 13 00:02:02.222522 containerd[1563]: time="2025-05-13T00:02:02.222062519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4kkn2,Uid:5e20f0bd-a4a6-4231-95cd-dfc2154ad934,Namespace:kube-system,Attempt:0,}"
May 13 00:02:02.226268 containerd[1563]: time="2025-05-13T00:02:02.226256432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jk44k,Uid:8b912761-2944-4d84-a039-66acc79094ec,Namespace:kube-system,Attempt:0,}"
May 13 00:02:03.954806 systemd-networkd[1472]: cilium_host: Link UP
May 13 00:02:03.954895 systemd-networkd[1472]: cilium_net: Link UP
May 13 00:02:03.956172 systemd-networkd[1472]: cilium_net: Gained carrier
May 13 00:02:03.956273 systemd-networkd[1472]: cilium_host: Gained carrier
May 13 00:02:04.071733 systemd-networkd[1472]: cilium_vxlan: Link UP
May 13 00:02:04.071738 systemd-networkd[1472]: cilium_vxlan: Gained carrier
May 13 00:02:04.121994 systemd-networkd[1472]: cilium_host: Gained IPv6LL
May 13 00:02:04.413941 kernel: NET: Registered PF_ALG protocol family
May 13 00:02:04.497015 systemd-networkd[1472]: cilium_net: Gained IPv6LL
May 13 00:02:04.814426 systemd-networkd[1472]: lxc_health: Link UP
May 13 00:02:04.819614 systemd-networkd[1472]: lxc_health: Gained carrier
May 13 00:02:05.264972 kernel: eth0: renamed from tmpd58f7
May 13 00:02:05.273021 systemd-networkd[1472]: lxc82b09f9d36f7: Link UP
May 13 00:02:05.273186 systemd-networkd[1472]: lxc82b09f9d36f7: Gained carrier
May 13 00:02:05.291960 kernel: eth0: renamed from tmpe864b
May 13 00:02:05.297396 systemd-networkd[1472]: lxcf78dd5be7d79: Link UP
May 13 00:02:05.298012 systemd-networkd[1472]: lxcf78dd5be7d79: Gained carrier
May 13 00:02:05.581275 kubelet[2835]: I0513 00:02:05.581192 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hz88k" podStartSLOduration=9.057897692 podStartE2EDuration="16.581179454s" podCreationTimestamp="2025-05-13 00:01:49 +0000 UTC" firstStartedPulling="2025-05-13 00:01:49.703517329 +0000 UTC m=+6.219400419" lastFinishedPulling="2025-05-13 00:01:57.226799088 +0000 UTC m=+13.742682181" observedRunningTime="2025-05-13 00:02:02.698553021 +0000 UTC m=+19.214436123" watchObservedRunningTime="2025-05-13 00:02:05.581179454 +0000 UTC m=+22.097062551"
May 13 00:02:05.905066 systemd-networkd[1472]: cilium_vxlan: Gained IPv6LL
May 13 00:02:06.161058 systemd-networkd[1472]: lxc_health: Gained IPv6LL
May 13 00:02:06.481086 systemd-networkd[1472]: lxc82b09f9d36f7: Gained IPv6LL
May 13 00:02:07.185059 systemd-networkd[1472]: lxcf78dd5be7d79: Gained IPv6LL
May 13 00:02:07.902821 containerd[1563]: time="2025-05-13T00:02:07.902648820Z" level=info msg="connecting to shim e864b1c09ec0e0167f7589538a2b82e2c1ea96a2e6fce47d785386fe68e8bfa9" address="unix:///run/containerd/s/bd89926f06f7a93ae0f1b0e0b179efbc79dbf2af16b4b2bcbc24940c1be06ee2" namespace=k8s.io protocol=ttrpc version=3
May 13 00:02:07.903403 containerd[1563]: time="2025-05-13T00:02:07.903364449Z" level=info msg="connecting to shim d58f7db7b51c6b0835161386612035f31b939812ee96e6e820dff583aee9e0cd" address="unix:///run/containerd/s/cd4d157088fb3f8a85a94c726f3f93e37e3f6a958e41253347d3cf7aefe42b1a" namespace=k8s.io protocol=ttrpc version=3
May 13 00:02:07.939014 systemd[1]: Started cri-containerd-d58f7db7b51c6b0835161386612035f31b939812ee96e6e820dff583aee9e0cd.scope - libcontainer container d58f7db7b51c6b0835161386612035f31b939812ee96e6e820dff583aee9e0cd.
May 13 00:02:07.940851 systemd[1]: Started cri-containerd-e864b1c09ec0e0167f7589538a2b82e2c1ea96a2e6fce47d785386fe68e8bfa9.scope - libcontainer container e864b1c09ec0e0167f7589538a2b82e2c1ea96a2e6fce47d785386fe68e8bfa9.
May 13 00:02:07.954049 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:02:07.962090 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:02:07.987985 containerd[1563]: time="2025-05-13T00:02:07.987830138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4kkn2,Uid:5e20f0bd-a4a6-4231-95cd-dfc2154ad934,Namespace:kube-system,Attempt:0,} returns sandbox id \"e864b1c09ec0e0167f7589538a2b82e2c1ea96a2e6fce47d785386fe68e8bfa9\""
May 13 00:02:07.989910 containerd[1563]: time="2025-05-13T00:02:07.989767753Z" level=info msg="CreateContainer within sandbox \"e864b1c09ec0e0167f7589538a2b82e2c1ea96a2e6fce47d785386fe68e8bfa9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:02:07.994263 containerd[1563]: time="2025-05-13T00:02:07.994233128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jk44k,Uid:8b912761-2944-4d84-a039-66acc79094ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"d58f7db7b51c6b0835161386612035f31b939812ee96e6e820dff583aee9e0cd\""
May 13 00:02:07.997194 containerd[1563]: time="2025-05-13T00:02:07.996016787Z" level=info msg="CreateContainer within sandbox \"d58f7db7b51c6b0835161386612035f31b939812ee96e6e820dff583aee9e0cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:02:08.006136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636218028.mount: Deactivated successfully.
May 13 00:02:08.006194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545018649.mount: Deactivated successfully.
May 13 00:02:08.006364 containerd[1563]: time="2025-05-13T00:02:08.006343918Z" level=info msg="Container 280b214763678f4a89393667a2b54bb783086afe3ee6568549875478468e8374: CDI devices from CRI Config.CDIDevices: []"
May 13 00:02:08.007833 containerd[1563]: time="2025-05-13T00:02:08.007814614Z" level=info msg="Container 8959b27a2f133efd68cdfca617f1ea5f218555fa913512700e254db4636bb287: CDI devices from CRI Config.CDIDevices: []"
May 13 00:02:08.011219 containerd[1563]: time="2025-05-13T00:02:08.011200400Z" level=info msg="CreateContainer within sandbox \"d58f7db7b51c6b0835161386612035f31b939812ee96e6e820dff583aee9e0cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8959b27a2f133efd68cdfca617f1ea5f218555fa913512700e254db4636bb287\""
May 13 00:02:08.011904 containerd[1563]: time="2025-05-13T00:02:08.011883362Z" level=info msg="StartContainer for \"8959b27a2f133efd68cdfca617f1ea5f218555fa913512700e254db4636bb287\""
May 13 00:02:08.012297 containerd[1563]: time="2025-05-13T00:02:08.012282300Z" level=info msg="connecting to shim 8959b27a2f133efd68cdfca617f1ea5f218555fa913512700e254db4636bb287" address="unix:///run/containerd/s/cd4d157088fb3f8a85a94c726f3f93e37e3f6a958e41253347d3cf7aefe42b1a" protocol=ttrpc version=3
May 13 00:02:08.015416 containerd[1563]: time="2025-05-13T00:02:08.015397797Z" level=info msg="CreateContainer within sandbox \"e864b1c09ec0e0167f7589538a2b82e2c1ea96a2e6fce47d785386fe68e8bfa9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"280b214763678f4a89393667a2b54bb783086afe3ee6568549875478468e8374\""
May 13 00:02:08.016931 containerd[1563]: time="2025-05-13T00:02:08.016900292Z" level=info msg="StartContainer for \"280b214763678f4a89393667a2b54bb783086afe3ee6568549875478468e8374\""
May 13 00:02:08.027341 containerd[1563]: time="2025-05-13T00:02:08.026854557Z" level=info msg="connecting to shim 280b214763678f4a89393667a2b54bb783086afe3ee6568549875478468e8374" address="unix:///run/containerd/s/bd89926f06f7a93ae0f1b0e0b179efbc79dbf2af16b4b2bcbc24940c1be06ee2" protocol=ttrpc version=3
May 13 00:02:08.036031 systemd[1]: Started cri-containerd-8959b27a2f133efd68cdfca617f1ea5f218555fa913512700e254db4636bb287.scope - libcontainer container 8959b27a2f133efd68cdfca617f1ea5f218555fa913512700e254db4636bb287.
May 13 00:02:08.039498 systemd[1]: Started cri-containerd-280b214763678f4a89393667a2b54bb783086afe3ee6568549875478468e8374.scope - libcontainer container 280b214763678f4a89393667a2b54bb783086afe3ee6568549875478468e8374.
May 13 00:02:08.070288 containerd[1563]: time="2025-05-13T00:02:08.070192617Z" level=info msg="StartContainer for \"8959b27a2f133efd68cdfca617f1ea5f218555fa913512700e254db4636bb287\" returns successfully"
May 13 00:02:08.070288 containerd[1563]: time="2025-05-13T00:02:08.070238316Z" level=info msg="StartContainer for \"280b214763678f4a89393667a2b54bb783086afe3ee6568549875478468e8374\" returns successfully"
May 13 00:02:08.714945 kubelet[2835]: I0513 00:02:08.714797 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4kkn2" podStartSLOduration=19.714785831 podStartE2EDuration="19.714785831s" podCreationTimestamp="2025-05-13 00:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:02:08.706138953 +0000 UTC m=+25.222022056" watchObservedRunningTime="2025-05-13 00:02:08.714785831 +0000 UTC m=+25.230668924"
May 13 00:02:08.724027 kubelet[2835]: I0513 00:02:08.723056 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jk44k" podStartSLOduration=19.723040375 podStartE2EDuration="19.723040375s" podCreationTimestamp="2025-05-13 00:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:02:08.722602245 +0000 UTC m=+25.238485346" watchObservedRunningTime="2025-05-13 00:02:08.723040375 +0000 UTC m=+25.238923466"
May 13 00:02:08.862856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447828402.mount: Deactivated successfully.
May 13 00:02:15.926886 kubelet[2835]: I0513 00:02:15.926705 2835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 00:02:50.986544 systemd[1]: Started sshd@7-139.178.70.99:22-147.75.109.163:55818.service - OpenSSH per-connection server daemon (147.75.109.163:55818).
May 13 00:02:51.044599 sshd[4148]: Accepted publickey for core from 147.75.109.163 port 55818 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:02:51.045905 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:02:51.049723 systemd-logind[1540]: New session 10 of user core.
May 13 00:02:51.059064 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 00:02:51.497212 sshd[4150]: Connection closed by 147.75.109.163 port 55818
May 13 00:02:51.497700 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
May 13 00:02:51.501692 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit.
May 13 00:02:51.502085 systemd[1]: sshd@7-139.178.70.99:22-147.75.109.163:55818.service: Deactivated successfully.
May 13 00:02:51.504518 systemd[1]: session-10.scope: Deactivated successfully.
May 13 00:02:51.505998 systemd-logind[1540]: Removed session 10.
May 13 00:02:56.510656 systemd[1]: Started sshd@8-139.178.70.99:22-147.75.109.163:55826.service - OpenSSH per-connection server daemon (147.75.109.163:55826).
May 13 00:02:56.599593 sshd[4163]: Accepted publickey for core from 147.75.109.163 port 55826 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:02:56.600399 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:02:56.603216 systemd-logind[1540]: New session 11 of user core.
May 13 00:02:56.608088 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 00:02:56.792232 sshd[4165]: Connection closed by 147.75.109.163 port 55826
May 13 00:02:56.792723 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
May 13 00:02:56.795397 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit.
May 13 00:02:56.795494 systemd[1]: sshd@8-139.178.70.99:22-147.75.109.163:55826.service: Deactivated successfully.
May 13 00:02:56.796881 systemd[1]: session-11.scope: Deactivated successfully.
May 13 00:02:56.798456 systemd-logind[1540]: Removed session 11.
May 13 00:03:01.799991 systemd[1]: Started sshd@9-139.178.70.99:22-147.75.109.163:58012.service - OpenSSH per-connection server daemon (147.75.109.163:58012).
May 13 00:03:01.836128 sshd[4178]: Accepted publickey for core from 147.75.109.163 port 58012 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:01.837009 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:01.840703 systemd-logind[1540]: New session 12 of user core.
May 13 00:03:01.847117 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 00:03:01.931261 sshd[4180]: Connection closed by 147.75.109.163 port 58012
May 13 00:03:01.931605 sshd-session[4178]: pam_unix(sshd:session): session closed for user core
May 13 00:03:01.933746 systemd[1]: sshd@9-139.178.70.99:22-147.75.109.163:58012.service: Deactivated successfully.
May 13 00:03:01.934746 systemd[1]: session-12.scope: Deactivated successfully.
May 13 00:03:01.935188 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit.
May 13 00:03:01.935787 systemd-logind[1540]: Removed session 12.
May 13 00:03:06.942896 systemd[1]: Started sshd@10-139.178.70.99:22-147.75.109.163:58022.service - OpenSSH per-connection server daemon (147.75.109.163:58022).
May 13 00:03:06.983016 sshd[4194]: Accepted publickey for core from 147.75.109.163 port 58022 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:06.983867 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:06.986961 systemd-logind[1540]: New session 13 of user core.
May 13 00:03:06.990064 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 00:03:07.090116 sshd[4196]: Connection closed by 147.75.109.163 port 58022
May 13 00:03:07.091174 sshd-session[4194]: pam_unix(sshd:session): session closed for user core
May 13 00:03:07.099642 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit.
May 13 00:03:07.099734 systemd[1]: sshd@10-139.178.70.99:22-147.75.109.163:58022.service: Deactivated successfully.
May 13 00:03:07.100982 systemd[1]: session-13.scope: Deactivated successfully.
May 13 00:03:07.103142 systemd[1]: Started sshd@11-139.178.70.99:22-147.75.109.163:58034.service - OpenSSH per-connection server daemon (147.75.109.163:58034).
May 13 00:03:07.103819 systemd-logind[1540]: Removed session 13.
May 13 00:03:07.139241 sshd[4207]: Accepted publickey for core from 147.75.109.163 port 58034 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:07.140014 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:07.143959 systemd-logind[1540]: New session 14 of user core.
May 13 00:03:07.147006 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 00:03:07.282044 sshd[4210]: Connection closed by 147.75.109.163 port 58034
May 13 00:03:07.282628 sshd-session[4207]: pam_unix(sshd:session): session closed for user core
May 13 00:03:07.291052 systemd[1]: sshd@11-139.178.70.99:22-147.75.109.163:58034.service: Deactivated successfully.
May 13 00:03:07.294336 systemd[1]: session-14.scope: Deactivated successfully.
May 13 00:03:07.295381 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit.
May 13 00:03:07.299446 systemd[1]: Started sshd@12-139.178.70.99:22-147.75.109.163:58036.service - OpenSSH per-connection server daemon (147.75.109.163:58036).
May 13 00:03:07.300700 systemd-logind[1540]: Removed session 14.
May 13 00:03:07.351906 sshd[4219]: Accepted publickey for core from 147.75.109.163 port 58036 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:07.352776 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:07.355863 systemd-logind[1540]: New session 15 of user core.
May 13 00:03:07.363057 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 00:03:07.464997 sshd[4222]: Connection closed by 147.75.109.163 port 58036
May 13 00:03:07.464688 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
May 13 00:03:07.466319 systemd[1]: sshd@12-139.178.70.99:22-147.75.109.163:58036.service: Deactivated successfully.
May 13 00:03:07.467533 systemd[1]: session-15.scope: Deactivated successfully.
May 13 00:03:07.468447 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit.
May 13 00:03:07.469209 systemd-logind[1540]: Removed session 15.
May 13 00:03:12.474074 systemd[1]: Started sshd@13-139.178.70.99:22-147.75.109.163:46620.service - OpenSSH per-connection server daemon (147.75.109.163:46620).
May 13 00:03:12.508088 sshd[4234]: Accepted publickey for core from 147.75.109.163 port 46620 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:12.508938 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:12.511589 systemd-logind[1540]: New session 16 of user core.
May 13 00:03:12.515016 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 00:03:12.600857 sshd[4236]: Connection closed by 147.75.109.163 port 46620
May 13 00:03:12.601989 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
May 13 00:03:12.603915 systemd[1]: sshd@13-139.178.70.99:22-147.75.109.163:46620.service: Deactivated successfully.
May 13 00:03:12.605091 systemd[1]: session-16.scope: Deactivated successfully.
May 13 00:03:12.605556 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit.
May 13 00:03:12.606200 systemd-logind[1540]: Removed session 16.
May 13 00:03:17.612595 systemd[1]: Started sshd@14-139.178.70.99:22-147.75.109.163:46626.service - OpenSSH per-connection server daemon (147.75.109.163:46626).
May 13 00:03:17.643744 sshd[4247]: Accepted publickey for core from 147.75.109.163 port 46626 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:17.645009 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:17.648709 systemd-logind[1540]: New session 17 of user core.
May 13 00:03:17.652038 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 00:03:17.740987 sshd[4249]: Connection closed by 147.75.109.163 port 46626
May 13 00:03:17.741337 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
May 13 00:03:17.750597 systemd[1]: sshd@14-139.178.70.99:22-147.75.109.163:46626.service: Deactivated successfully.
May 13 00:03:17.751701 systemd[1]: session-17.scope: Deactivated successfully.
May 13 00:03:17.752497 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit.
May 13 00:03:17.753767 systemd[1]: Started sshd@15-139.178.70.99:22-147.75.109.163:46636.service - OpenSSH per-connection server daemon (147.75.109.163:46636).
May 13 00:03:17.754728 systemd-logind[1540]: Removed session 17.
May 13 00:03:17.783367 sshd[4260]: Accepted publickey for core from 147.75.109.163 port 46636 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:17.784111 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:17.786811 systemd-logind[1540]: New session 18 of user core.
May 13 00:03:17.792032 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 00:03:18.545383 sshd[4263]: Connection closed by 147.75.109.163 port 46636
May 13 00:03:18.546102 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
May 13 00:03:18.556217 systemd[1]: sshd@15-139.178.70.99:22-147.75.109.163:46636.service: Deactivated successfully.
May 13 00:03:18.557414 systemd[1]: session-18.scope: Deactivated successfully.
May 13 00:03:18.557918 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit.
May 13 00:03:18.559407 systemd[1]: Started sshd@16-139.178.70.99:22-147.75.109.163:56630.service - OpenSSH per-connection server daemon (147.75.109.163:56630).
May 13 00:03:18.560289 systemd-logind[1540]: Removed session 18.
May 13 00:03:18.601521 sshd[4272]: Accepted publickey for core from 147.75.109.163 port 56630 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:18.602314 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:18.605155 systemd-logind[1540]: New session 19 of user core.
May 13 00:03:18.610014 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 00:03:19.378817 sshd[4275]: Connection closed by 147.75.109.163 port 56630
May 13 00:03:19.379475 sshd-session[4272]: pam_unix(sshd:session): session closed for user core
May 13 00:03:19.386329 systemd[1]: sshd@16-139.178.70.99:22-147.75.109.163:56630.service: Deactivated successfully.
May 13 00:03:19.388134 systemd[1]: session-19.scope: Deactivated successfully.
May 13 00:03:19.389420 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit.
May 13 00:03:19.392132 systemd[1]: Started sshd@17-139.178.70.99:22-147.75.109.163:56642.service - OpenSSH per-connection server daemon (147.75.109.163:56642).
May 13 00:03:19.394811 systemd-logind[1540]: Removed session 19.
May 13 00:03:19.430892 sshd[4290]: Accepted publickey for core from 147.75.109.163 port 56642 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:19.431895 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:19.435711 systemd-logind[1540]: New session 20 of user core.
May 13 00:03:19.439016 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 00:03:19.611015 sshd[4293]: Connection closed by 147.75.109.163 port 56642
May 13 00:03:19.612122 sshd-session[4290]: pam_unix(sshd:session): session closed for user core
May 13 00:03:19.620592 systemd[1]: sshd@17-139.178.70.99:22-147.75.109.163:56642.service: Deactivated successfully.
May 13 00:03:19.621676 systemd[1]: session-20.scope: Deactivated successfully.
May 13 00:03:19.622524 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit.
May 13 00:03:19.623987 systemd[1]: Started sshd@18-139.178.70.99:22-147.75.109.163:56654.service - OpenSSH per-connection server daemon (147.75.109.163:56654).
May 13 00:03:19.624986 systemd-logind[1540]: Removed session 20.
May 13 00:03:19.654434 sshd[4301]: Accepted publickey for core from 147.75.109.163 port 56654 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:19.656330 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:19.661161 systemd-logind[1540]: New session 21 of user core.
May 13 00:03:19.664434 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 00:03:19.756399 sshd[4304]: Connection closed by 147.75.109.163 port 56654
May 13 00:03:19.756749 sshd-session[4301]: pam_unix(sshd:session): session closed for user core
May 13 00:03:19.758747 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit.
May 13 00:03:19.759048 systemd[1]: sshd@18-139.178.70.99:22-147.75.109.163:56654.service: Deactivated successfully.
May 13 00:03:19.760233 systemd[1]: session-21.scope: Deactivated successfully.
May 13 00:03:19.760809 systemd-logind[1540]: Removed session 21.
May 13 00:03:24.766991 systemd[1]: Started sshd@19-139.178.70.99:22-147.75.109.163:56662.service - OpenSSH per-connection server daemon (147.75.109.163:56662).
May 13 00:03:24.809600 sshd[4321]: Accepted publickey for core from 147.75.109.163 port 56662 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:24.810661 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:24.814050 systemd-logind[1540]: New session 22 of user core.
May 13 00:03:24.821029 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 00:03:24.911431 sshd[4323]: Connection closed by 147.75.109.163 port 56662
May 13 00:03:24.911770 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
May 13 00:03:24.913753 systemd[1]: sshd@19-139.178.70.99:22-147.75.109.163:56662.service: Deactivated successfully.
May 13 00:03:24.915088 systemd[1]: session-22.scope: Deactivated successfully.
May 13 00:03:24.915693 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit.
May 13 00:03:24.916248 systemd-logind[1540]: Removed session 22.
May 13 00:03:29.920669 systemd[1]: Started sshd@20-139.178.70.99:22-147.75.109.163:38448.service - OpenSSH per-connection server daemon (147.75.109.163:38448).
May 13 00:03:29.959967 sshd[4334]: Accepted publickey for core from 147.75.109.163 port 38448 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:29.960765 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:29.964502 systemd-logind[1540]: New session 23 of user core.
May 13 00:03:29.970013 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 00:03:30.067459 sshd[4336]: Connection closed by 147.75.109.163 port 38448
May 13 00:03:30.067842 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
May 13 00:03:30.069691 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit.
May 13 00:03:30.069868 systemd[1]: sshd@20-139.178.70.99:22-147.75.109.163:38448.service: Deactivated successfully.
May 13 00:03:30.071202 systemd[1]: session-23.scope: Deactivated successfully.
May 13 00:03:30.072615 systemd-logind[1540]: Removed session 23.
May 13 00:03:35.077240 systemd[1]: Started sshd@21-139.178.70.99:22-147.75.109.163:38460.service - OpenSSH per-connection server daemon (147.75.109.163:38460).
May 13 00:03:35.128088 sshd[4348]: Accepted publickey for core from 147.75.109.163 port 38460 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:35.128931 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:35.132328 systemd-logind[1540]: New session 24 of user core.
May 13 00:03:35.139089 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 00:03:35.282944 sshd[4350]: Connection closed by 147.75.109.163 port 38460
May 13 00:03:35.283391 sshd-session[4348]: pam_unix(sshd:session): session closed for user core
May 13 00:03:35.285606 systemd[1]: sshd@21-139.178.70.99:22-147.75.109.163:38460.service: Deactivated successfully.
May 13 00:03:35.286765 systemd[1]: session-24.scope: Deactivated successfully.
May 13 00:03:35.287323 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit.
May 13 00:03:35.288110 systemd-logind[1540]: Removed session 24.
May 13 00:03:40.294710 systemd[1]: Started sshd@22-139.178.70.99:22-147.75.109.163:48160.service - OpenSSH per-connection server daemon (147.75.109.163:48160).
May 13 00:03:40.338053 sshd[4362]: Accepted publickey for core from 147.75.109.163 port 48160 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:40.339098 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:40.342347 systemd-logind[1540]: New session 25 of user core.
May 13 00:03:40.350087 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 00:03:40.439130 sshd[4364]: Connection closed by 147.75.109.163 port 48160
May 13 00:03:40.439723 sshd-session[4362]: pam_unix(sshd:session): session closed for user core
May 13 00:03:40.448377 systemd[1]: sshd@22-139.178.70.99:22-147.75.109.163:48160.service: Deactivated successfully.
May 13 00:03:40.449799 systemd[1]: session-25.scope: Deactivated successfully.
May 13 00:03:40.450576 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit.
May 13 00:03:40.451448 systemd[1]: Started sshd@23-139.178.70.99:22-147.75.109.163:48162.service - OpenSSH per-connection server daemon (147.75.109.163:48162).
May 13 00:03:40.452551 systemd-logind[1540]: Removed session 25.
May 13 00:03:40.481187 sshd[4374]: Accepted publickey for core from 147.75.109.163 port 48162 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:40.481916 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:40.485498 systemd-logind[1540]: New session 26 of user core.
May 13 00:03:40.493011 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 00:03:41.865375 containerd[1563]: time="2025-05-13T00:03:41.865285162Z" level=info msg="StopContainer for \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" with timeout 30 (s)"
May 13 00:03:41.866880 containerd[1563]: time="2025-05-13T00:03:41.866735676Z" level=info msg="Stop container \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" with signal terminated"
May 13 00:03:41.876358 containerd[1563]: time="2025-05-13T00:03:41.876324324Z" level=info msg="received exit event container_id:\"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" id:\"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" pid:3351 exited_at:{seconds:1747094621 nanos:876098060}"
May 13 00:03:41.876344 systemd[1]: cri-containerd-0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5.scope: Deactivated successfully.
May 13 00:03:41.878233 containerd[1563]: time="2025-05-13T00:03:41.876580729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" id:\"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" pid:3351 exited_at:{seconds:1747094621 nanos:876098060}"
May 13 00:03:41.890286 containerd[1563]: time="2025-05-13T00:03:41.890244596Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:03:41.890390 containerd[1563]: time="2025-05-13T00:03:41.890320118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" id:\"f439a3935b0b4c92e200caed843beedbc96833c02d8f55841ab27b0f5e8840ea\" pid:4404 exited_at:{seconds:1747094621 nanos:889843579}"
May 13 00:03:41.891692 containerd[1563]: time="2025-05-13T00:03:41.891661059Z" level=info msg="StopContainer for \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" with timeout 2 (s)"
May 13 00:03:41.891898 containerd[1563]: time="2025-05-13T00:03:41.891879973Z" level=info msg="Stop container \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" with signal terminated"
May 13 00:03:41.896109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5-rootfs.mount: Deactivated successfully.
May 13 00:03:41.901380 systemd-networkd[1472]: lxc_health: Link DOWN
May 13 00:03:41.901873 systemd-networkd[1472]: lxc_health: Lost carrier
May 13 00:03:41.912980 containerd[1563]: time="2025-05-13T00:03:41.912256483Z" level=info msg="StopContainer for \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" returns successfully"
May 13 00:03:41.915268 systemd[1]: cri-containerd-bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282.scope: Deactivated successfully.
May 13 00:03:41.915454 systemd[1]: cri-containerd-bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282.scope: Consumed 4.382s CPU time, 193.4M memory peak, 68.2M read from disk, 13.3M written to disk.
May 13 00:03:41.930949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282-rootfs.mount: Deactivated successfully.
May 13 00:03:41.931442 containerd[1563]: time="2025-05-13T00:03:41.917220604Z" level=info msg="received exit event container_id:\"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" id:\"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" pid:3460 exited_at:{seconds:1747094621 nanos:916982851}"
May 13 00:03:41.931442 containerd[1563]: time="2025-05-13T00:03:41.917288051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" id:\"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" pid:3460 exited_at:{seconds:1747094621 nanos:916982851}"
May 13 00:03:41.957646 containerd[1563]: time="2025-05-13T00:03:41.957613872Z" level=info msg="StopPodSandbox for \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\""
May 13 00:03:41.966679 containerd[1563]: time="2025-05-13T00:03:41.966656143Z" level=info msg="Container to stop \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:03:41.971581 systemd[1]: cri-containerd-570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b.scope: Deactivated successfully.
May 13 00:03:41.973144 containerd[1563]: time="2025-05-13T00:03:41.972144598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" id:\"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" pid:3061 exit_status:137 exited_at:{seconds:1747094621 nanos:971721023}"
May 13 00:03:41.976108 containerd[1563]: time="2025-05-13T00:03:41.976084218Z" level=info msg="StopContainer for \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" returns successfully"
May 13 00:03:41.976569 containerd[1563]: time="2025-05-13T00:03:41.976552595Z" level=info msg="StopPodSandbox for \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\""
May 13 00:03:41.976778 containerd[1563]: time="2025-05-13T00:03:41.976596065Z" level=info msg="Container to stop \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:03:41.976778 containerd[1563]: time="2025-05-13T00:03:41.976696573Z" level=info msg="Container to stop \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:03:41.976778 containerd[1563]: time="2025-05-13T00:03:41.976704178Z" level=info msg="Container to stop \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:03:41.976778 containerd[1563]: time="2025-05-13T00:03:41.976709724Z" level=info msg="Container to stop \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:03:41.976778 containerd[1563]: time="2025-05-13T00:03:41.976714693Z" level=info msg="Container to stop \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:03:41.983436 systemd[1]: cri-containerd-d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e.scope: Deactivated successfully.
May 13 00:03:42.003519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b-rootfs.mount: Deactivated successfully.
May 13 00:03:42.008556 containerd[1563]: time="2025-05-13T00:03:42.007084770Z" level=info msg="shim disconnected" id=570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b namespace=k8s.io
May 13 00:03:42.008556 containerd[1563]: time="2025-05-13T00:03:42.008348251Z" level=warning msg="cleaning up after shim disconnected" id=570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b namespace=k8s.io
May 13 00:03:42.007885 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b-shm.mount: Deactivated successfully.
May 13 00:03:42.009894 containerd[1563]: time="2025-05-13T00:03:42.008363977Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:03:42.010389 containerd[1563]: time="2025-05-13T00:03:42.008593029Z" level=info msg="received exit event sandbox_id:\"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" exit_status:137 exited_at:{seconds:1747094621 nanos:971721023}"
May 13 00:03:42.012947 containerd[1563]: time="2025-05-13T00:03:42.012902954Z" level=info msg="TearDown network for sandbox \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" successfully"
May 13 00:03:42.012947 containerd[1563]: time="2025-05-13T00:03:42.012938589Z" level=info msg="StopPodSandbox for \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" returns successfully"
May 13 00:03:42.016721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e-rootfs.mount: Deactivated successfully.
May 13 00:03:42.018746 containerd[1563]: time="2025-05-13T00:03:42.018681235Z" level=info msg="shim disconnected" id=d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e namespace=k8s.io
May 13 00:03:42.018746 containerd[1563]: time="2025-05-13T00:03:42.018703041Z" level=warning msg="cleaning up after shim disconnected" id=d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e namespace=k8s.io
May 13 00:03:42.018746 containerd[1563]: time="2025-05-13T00:03:42.018711977Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:03:42.033438 containerd[1563]: time="2025-05-13T00:03:42.033214780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" id:\"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" pid:2995 exit_status:137 exited_at:{seconds:1747094621 nanos:984463737}"
May 13 00:03:42.033902 containerd[1563]: time="2025-05-13T00:03:42.033859872Z" level=info msg="received exit event sandbox_id:\"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" exit_status:137 exited_at:{seconds:1747094621 nanos:984463737}"
May 13 00:03:42.034134 containerd[1563]: time="2025-05-13T00:03:42.034117456Z" level=info msg="TearDown network for sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" successfully"
May 13 00:03:42.034134 containerd[1563]: time="2025-05-13T00:03:42.034130419Z" level=info msg="StopPodSandbox for \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" returns successfully"
May 13 00:03:42.112666 kubelet[2835]: I0513 00:03:42.112545 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cni-path\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.123619 kubelet[2835]: I0513 00:03:42.122485 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/95574585-119d-4c26-add6-806627db6d54-clustermesh-secrets\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.123619 kubelet[2835]: I0513 00:03:42.122533 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-host-proc-sys-net\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.123619 kubelet[2835]: I0513 00:03:42.122565 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/95574585-119d-4c26-add6-806627db6d54-hubble-tls\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.123619 kubelet[2835]: I0513 00:03:42.122589 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cilium-cgroup\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.123619 kubelet[2835]: I0513 00:03:42.122609 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-host-proc-sys-kernel\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.123619 kubelet[2835]: I0513 00:03:42.122638 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c28b31b5-51a1-415e-a1b3-96b7cab69362-cilium-config-path\") pod \"c28b31b5-51a1-415e-a1b3-96b7cab69362\" (UID: \"c28b31b5-51a1-415e-a1b3-96b7cab69362\") "
May 13 00:03:42.124337 kubelet[2835]: I0513 00:03:42.122651 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-hostproc\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.124337 kubelet[2835]: I0513 00:03:42.122665 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-etc-cni-netd\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.124337 kubelet[2835]: I0513 00:03:42.122678 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65x9r\" (UniqueName: \"kubernetes.io/projected/c28b31b5-51a1-415e-a1b3-96b7cab69362-kube-api-access-65x9r\") pod \"c28b31b5-51a1-415e-a1b3-96b7cab69362\" (UID: \"c28b31b5-51a1-415e-a1b3-96b7cab69362\") "
May 13 00:03:42.124337 kubelet[2835]: I0513 00:03:42.122689 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-lib-modules\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.124337 kubelet[2835]: I0513 00:03:42.122699 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-xtables-lock\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.124337 kubelet[2835]: I0513 00:03:42.122713 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-bpf-maps\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.124489 kubelet[2835]: I0513 00:03:42.122726 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cilium-run\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.124489 kubelet[2835]: I0513 00:03:42.122739 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95574585-119d-4c26-add6-806627db6d54-cilium-config-path\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.124489 kubelet[2835]: I0513 00:03:42.122756 2835 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhkm7\" (UniqueName: \"kubernetes.io/projected/95574585-119d-4c26-add6-806627db6d54-kube-api-access-zhkm7\") pod \"95574585-119d-4c26-add6-806627db6d54\" (UID: \"95574585-119d-4c26-add6-806627db6d54\") "
May 13 00:03:42.127086 kubelet[2835]: I0513 00:03:42.127056 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cni-path" (OuterVolumeSpecName: "cni-path") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.128662 kubelet[2835]: I0513 00:03:42.128645 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95574585-119d-4c26-add6-806627db6d54-kube-api-access-zhkm7" (OuterVolumeSpecName: "kube-api-access-zhkm7") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "kube-api-access-zhkm7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 00:03:42.128737 kubelet[2835]: I0513 00:03:42.128722 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95574585-119d-4c26-add6-806627db6d54-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 13 00:03:42.128778 kubelet[2835]: I0513 00:03:42.125995 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.129978 kubelet[2835]: I0513 00:03:42.129960 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95574585-119d-4c26-add6-806627db6d54-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 00:03:42.130028 kubelet[2835]: I0513 00:03:42.129984 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.130028 kubelet[2835]: I0513 00:03:42.129998 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.130028 kubelet[2835]: I0513 00:03:42.130008 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.130028 kubelet[2835]: I0513 00:03:42.130017 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.131256 kubelet[2835]: I0513 00:03:42.131235 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28b31b5-51a1-415e-a1b3-96b7cab69362-kube-api-access-65x9r" (OuterVolumeSpecName: "kube-api-access-65x9r") pod "c28b31b5-51a1-415e-a1b3-96b7cab69362" (UID: "c28b31b5-51a1-415e-a1b3-96b7cab69362"). InnerVolumeSpecName "kube-api-access-65x9r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 00:03:42.131297 kubelet[2835]: I0513 00:03:42.131257 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.131297 kubelet[2835]: I0513 00:03:42.131272 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.132227 kubelet[2835]: I0513 00:03:42.132172 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95574585-119d-4c26-add6-806627db6d54-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 13 00:03:42.132227 kubelet[2835]: I0513 00:03:42.132204 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-hostproc" (OuterVolumeSpecName: "hostproc") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.132227 kubelet[2835]: I0513 00:03:42.132216 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "95574585-119d-4c26-add6-806627db6d54" (UID: "95574585-119d-4c26-add6-806627db6d54"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:03:42.132659 kubelet[2835]: I0513 00:03:42.132635 2835 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28b31b5-51a1-415e-a1b3-96b7cab69362-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c28b31b5-51a1-415e-a1b3-96b7cab69362" (UID: "c28b31b5-51a1-415e-a1b3-96b7cab69362"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:03:42.223633 kubelet[2835]: I0513 00:03:42.223607 2835 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/95574585-119d-4c26-add6-806627db6d54-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223633 kubelet[2835]: I0513 00:03:42.223630 2835 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223633 kubelet[2835]: I0513 00:03:42.223637 2835 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/95574585-119d-4c26-add6-806627db6d54-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223633 kubelet[2835]: I0513 00:03:42.223641 2835 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223785 kubelet[2835]: I0513 00:03:42.223645 2835 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223785 kubelet[2835]: I0513 00:03:42.223650 2835 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223785 kubelet[2835]: I0513 00:03:42.223655 2835 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c28b31b5-51a1-415e-a1b3-96b7cab69362-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223785 
kubelet[2835]: I0513 00:03:42.223659 2835 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223785 kubelet[2835]: I0513 00:03:42.223664 2835 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223785 kubelet[2835]: I0513 00:03:42.223668 2835 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-65x9r\" (UniqueName: \"kubernetes.io/projected/c28b31b5-51a1-415e-a1b3-96b7cab69362-kube-api-access-65x9r\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223785 kubelet[2835]: I0513 00:03:42.223673 2835 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223785 kubelet[2835]: I0513 00:03:42.223679 2835 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223942 kubelet[2835]: I0513 00:03:42.223683 2835 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223942 kubelet[2835]: I0513 00:03:42.223687 2835 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/95574585-119d-4c26-add6-806627db6d54-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223942 kubelet[2835]: I0513 00:03:42.223691 2835 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/95574585-119d-4c26-add6-806627db6d54-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.223942 kubelet[2835]: I0513 00:03:42.223697 2835 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhkm7\" (UniqueName: \"kubernetes.io/projected/95574585-119d-4c26-add6-806627db6d54-kube-api-access-zhkm7\") on node \"localhost\" DevicePath \"\"" May 13 00:03:42.861840 systemd[1]: Removed slice kubepods-besteffort-podc28b31b5_51a1_415e_a1b3_96b7cab69362.slice - libcontainer container kubepods-besteffort-podc28b31b5_51a1_415e_a1b3_96b7cab69362.slice. May 13 00:03:42.888319 kubelet[2835]: I0513 00:03:42.888191 2835 scope.go:117] "RemoveContainer" containerID="0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5" May 13 00:03:42.896197 systemd[1]: var-lib-kubelet-pods-c28b31b5\x2d51a1\x2d415e\x2da1b3\x2d96b7cab69362-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65x9r.mount: Deactivated successfully. May 13 00:03:42.897159 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e-shm.mount: Deactivated successfully. May 13 00:03:42.897249 systemd[1]: var-lib-kubelet-pods-95574585\x2d119d\x2d4c26\x2dadd6\x2d806627db6d54-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzhkm7.mount: Deactivated successfully. May 13 00:03:42.897315 systemd[1]: var-lib-kubelet-pods-95574585\x2d119d\x2d4c26\x2dadd6\x2d806627db6d54-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:03:42.897372 systemd[1]: var-lib-kubelet-pods-95574585\x2d119d\x2d4c26\x2dadd6\x2d806627db6d54-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 00:03:42.902275 containerd[1563]: time="2025-05-13T00:03:42.902063291Z" level=info msg="RemoveContainer for \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\"" May 13 00:03:42.909723 systemd[1]: Removed slice kubepods-burstable-pod95574585_119d_4c26_add6_806627db6d54.slice - libcontainer container kubepods-burstable-pod95574585_119d_4c26_add6_806627db6d54.slice. May 13 00:03:42.909996 systemd[1]: kubepods-burstable-pod95574585_119d_4c26_add6_806627db6d54.slice: Consumed 4.440s CPU time, 194.3M memory peak, 68.4M read from disk, 13.3M written to disk. May 13 00:03:42.915449 containerd[1563]: time="2025-05-13T00:03:42.915428601Z" level=info msg="RemoveContainer for \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" returns successfully" May 13 00:03:42.916115 kubelet[2835]: I0513 00:03:42.916096 2835 scope.go:117] "RemoveContainer" containerID="0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5" May 13 00:03:42.926160 containerd[1563]: time="2025-05-13T00:03:42.916406791Z" level=error msg="ContainerStatus for \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\": not found" May 13 00:03:42.926581 kubelet[2835]: E0513 00:03:42.926462 2835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\": not found" containerID="0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5" May 13 00:03:42.996635 kubelet[2835]: I0513 00:03:42.930509 2835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5"} err="failed to get container status 
\"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0274ccad5cf5bbe6538812f143f038c5f43cf585d3d966441313aeca170a6bd5\": not found" May 13 00:03:42.996635 kubelet[2835]: I0513 00:03:42.996550 2835 scope.go:117] "RemoveContainer" containerID="bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282" May 13 00:03:42.998822 containerd[1563]: time="2025-05-13T00:03:42.998618395Z" level=info msg="RemoveContainer for \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\"" May 13 00:03:43.002124 containerd[1563]: time="2025-05-13T00:03:43.002104516Z" level=info msg="RemoveContainer for \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" returns successfully" May 13 00:03:43.002456 kubelet[2835]: I0513 00:03:43.002358 2835 scope.go:117] "RemoveContainer" containerID="4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d" May 13 00:03:43.003473 containerd[1563]: time="2025-05-13T00:03:43.003453272Z" level=info msg="RemoveContainer for \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\"" May 13 00:03:43.005808 containerd[1563]: time="2025-05-13T00:03:43.005780863Z" level=info msg="RemoveContainer for \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\" returns successfully" May 13 00:03:43.005907 kubelet[2835]: I0513 00:03:43.005888 2835 scope.go:117] "RemoveContainer" containerID="bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746" May 13 00:03:43.007735 containerd[1563]: time="2025-05-13T00:03:43.007715294Z" level=info msg="RemoveContainer for \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\"" May 13 00:03:43.009717 containerd[1563]: time="2025-05-13T00:03:43.009696453Z" level=info msg="RemoveContainer for \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\" returns successfully" May 13 00:03:43.009816 kubelet[2835]: I0513 00:03:43.009794 2835 
scope.go:117] "RemoveContainer" containerID="5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649" May 13 00:03:43.011010 containerd[1563]: time="2025-05-13T00:03:43.010988677Z" level=info msg="RemoveContainer for \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\"" May 13 00:03:43.012553 containerd[1563]: time="2025-05-13T00:03:43.012534492Z" level=info msg="RemoveContainer for \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\" returns successfully" May 13 00:03:43.012654 kubelet[2835]: I0513 00:03:43.012637 2835 scope.go:117] "RemoveContainer" containerID="c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0" May 13 00:03:43.013615 containerd[1563]: time="2025-05-13T00:03:43.013602621Z" level=info msg="RemoveContainer for \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\"" May 13 00:03:43.015300 containerd[1563]: time="2025-05-13T00:03:43.015285470Z" level=info msg="RemoveContainer for \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\" returns successfully" May 13 00:03:43.015510 kubelet[2835]: I0513 00:03:43.015494 2835 scope.go:117] "RemoveContainer" containerID="bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282" May 13 00:03:43.015645 containerd[1563]: time="2025-05-13T00:03:43.015597641Z" level=error msg="ContainerStatus for \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\": not found" May 13 00:03:43.015702 kubelet[2835]: E0513 00:03:43.015682 2835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\": not found" containerID="bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282" May 13 00:03:43.015702 
kubelet[2835]: I0513 00:03:43.015726 2835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282"} err="failed to get container status \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd2b5d8938bdaf6874dbdeb85f32c5562bc3f8fcb9d8aae4473325df4ec3c282\": not found" May 13 00:03:43.015702 kubelet[2835]: I0513 00:03:43.015741 2835 scope.go:117] "RemoveContainer" containerID="4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d" May 13 00:03:43.016011 containerd[1563]: time="2025-05-13T00:03:43.015953610Z" level=error msg="ContainerStatus for \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\": not found" May 13 00:03:43.016054 kubelet[2835]: E0513 00:03:43.016026 2835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\": not found" containerID="4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d" May 13 00:03:43.016054 kubelet[2835]: I0513 00:03:43.016038 2835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d"} err="failed to get container status \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4484d0916a5729544de249bf459244bf23fe1fe702a144c47c28948ac49cd17d\": not found" May 13 00:03:43.016054 kubelet[2835]: I0513 00:03:43.016049 2835 scope.go:117] "RemoveContainer" 
containerID="bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746" May 13 00:03:43.016336 containerd[1563]: time="2025-05-13T00:03:43.016290680Z" level=error msg="ContainerStatus for \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\": not found" May 13 00:03:43.016395 kubelet[2835]: E0513 00:03:43.016363 2835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\": not found" containerID="bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746" May 13 00:03:43.016395 kubelet[2835]: I0513 00:03:43.016373 2835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746"} err="failed to get container status \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\": rpc error: code = NotFound desc = an error occurred when try to find container \"bac17dc5f47788c2fa04214074ec35a2825db729edd3f534edb9674b56f6e746\": not found" May 13 00:03:43.016395 kubelet[2835]: I0513 00:03:43.016381 2835 scope.go:117] "RemoveContainer" containerID="5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649" May 13 00:03:43.016610 kubelet[2835]: E0513 00:03:43.016556 2835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\": not found" containerID="5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649" May 13 00:03:43.016610 kubelet[2835]: I0513 00:03:43.016565 2835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649"} err="failed to get container status \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\": not found" May 13 00:03:43.016610 kubelet[2835]: I0513 00:03:43.016575 2835 scope.go:117] "RemoveContainer" containerID="c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0" May 13 00:03:43.016697 containerd[1563]: time="2025-05-13T00:03:43.016463104Z" level=error msg="ContainerStatus for \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cf303781266ffcde63cc6fd9cdc078b96cf9278878bb0f72be616b3b763d649\": not found" May 13 00:03:43.016917 containerd[1563]: time="2025-05-13T00:03:43.016856818Z" level=error msg="ContainerStatus for \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\": not found" May 13 00:03:43.016982 kubelet[2835]: E0513 00:03:43.016959 2835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\": not found" containerID="c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0" May 13 00:03:43.016982 kubelet[2835]: I0513 00:03:43.016971 2835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0"} err="failed to get container status \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\": rpc error: code = NotFound 
desc = an error occurred when try to find container \"c2fc21c71dbb748b485cb8ae614b05341b110d00b14a4e7e801a6bffc87077d0\": not found" May 13 00:03:43.548164 kubelet[2835]: I0513 00:03:43.548137 2835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95574585-119d-4c26-add6-806627db6d54" path="/var/lib/kubelet/pods/95574585-119d-4c26-add6-806627db6d54/volumes" May 13 00:03:43.548587 kubelet[2835]: I0513 00:03:43.548571 2835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28b31b5-51a1-415e-a1b3-96b7cab69362" path="/var/lib/kubelet/pods/c28b31b5-51a1-415e-a1b3-96b7cab69362/volumes" May 13 00:03:43.597650 containerd[1563]: time="2025-05-13T00:03:43.596670862Z" level=info msg="StopPodSandbox for \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\"" May 13 00:03:43.597650 containerd[1563]: time="2025-05-13T00:03:43.596802928Z" level=info msg="TearDown network for sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" successfully" May 13 00:03:43.597650 containerd[1563]: time="2025-05-13T00:03:43.596815614Z" level=info msg="StopPodSandbox for \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" returns successfully" May 13 00:03:43.597650 containerd[1563]: time="2025-05-13T00:03:43.597078513Z" level=info msg="RemovePodSandbox for \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\"" May 13 00:03:43.597650 containerd[1563]: time="2025-05-13T00:03:43.597094420Z" level=info msg="Forcibly stopping sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\"" May 13 00:03:43.597650 containerd[1563]: time="2025-05-13T00:03:43.597161632Z" level=info msg="TearDown network for sandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" successfully" May 13 00:03:43.598890 containerd[1563]: time="2025-05-13T00:03:43.598864130Z" level=info msg="Ensure that sandbox d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e in 
task-service has been cleanup successfully" May 13 00:03:43.608684 containerd[1563]: time="2025-05-13T00:03:43.608655618Z" level=info msg="RemovePodSandbox \"d0fb933b03a964a3c167b962e23579e8b99b9c8dcbd1bea7ae43da1b4b59135e\" returns successfully" May 13 00:03:43.609177 containerd[1563]: time="2025-05-13T00:03:43.608968185Z" level=info msg="StopPodSandbox for \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\"" May 13 00:03:43.609177 containerd[1563]: time="2025-05-13T00:03:43.609050406Z" level=info msg="TearDown network for sandbox \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" successfully" May 13 00:03:43.609177 containerd[1563]: time="2025-05-13T00:03:43.609059903Z" level=info msg="StopPodSandbox for \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" returns successfully" May 13 00:03:43.610285 containerd[1563]: time="2025-05-13T00:03:43.609886351Z" level=info msg="RemovePodSandbox for \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\"" May 13 00:03:43.610285 containerd[1563]: time="2025-05-13T00:03:43.609939918Z" level=info msg="Forcibly stopping sandbox \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\"" May 13 00:03:43.610285 containerd[1563]: time="2025-05-13T00:03:43.610004076Z" level=info msg="TearDown network for sandbox \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" successfully" May 13 00:03:43.611242 containerd[1563]: time="2025-05-13T00:03:43.611008804Z" level=info msg="Ensure that sandbox 570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b in task-service has been cleanup successfully" May 13 00:03:43.623527 containerd[1563]: time="2025-05-13T00:03:43.623507343Z" level=info msg="RemovePodSandbox \"570c550b7bd7771583902c78b91b58a9004813fba6b34ea845b2c1deafd8f07b\" returns successfully" May 13 00:03:43.624391 kubelet[2835]: E0513 00:03:43.624369 2835 kubelet.go:3008] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:03:43.794055 sshd[4377]: Connection closed by 147.75.109.163 port 48162 May 13 00:03:43.793990 sshd-session[4374]: pam_unix(sshd:session): session closed for user core May 13 00:03:43.799986 systemd[1]: sshd@23-139.178.70.99:22-147.75.109.163:48162.service: Deactivated successfully. May 13 00:03:43.801168 systemd[1]: session-26.scope: Deactivated successfully. May 13 00:03:43.801603 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit. May 13 00:03:43.803272 systemd[1]: Started sshd@24-139.178.70.99:22-147.75.109.163:48176.service - OpenSSH per-connection server daemon (147.75.109.163:48176). May 13 00:03:43.804113 systemd-logind[1540]: Removed session 26. May 13 00:03:43.833750 sshd[4527]: Accepted publickey for core from 147.75.109.163 port 48176 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:43.834653 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:43.837460 systemd-logind[1540]: New session 27 of user core. May 13 00:03:43.845010 systemd[1]: Started session-27.scope - Session 27 of User core. May 13 00:03:44.190751 sshd[4530]: Connection closed by 147.75.109.163 port 48176 May 13 00:03:44.190974 sshd-session[4527]: pam_unix(sshd:session): session closed for user core May 13 00:03:44.201712 systemd[1]: sshd@24-139.178.70.99:22-147.75.109.163:48176.service: Deactivated successfully. May 13 00:03:44.202042 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit. May 13 00:03:44.203508 systemd[1]: session-27.scope: Deactivated successfully. May 13 00:03:44.208149 systemd[1]: Started sshd@25-139.178.70.99:22-147.75.109.163:48192.service - OpenSSH per-connection server daemon (147.75.109.163:48192). May 13 00:03:44.209711 systemd-logind[1540]: Removed session 27. 
May 13 00:03:44.223851 kubelet[2835]: I0513 00:03:44.213386 2835 memory_manager.go:355] "RemoveStaleState removing state" podUID="c28b31b5-51a1-415e-a1b3-96b7cab69362" containerName="cilium-operator" May 13 00:03:44.223851 kubelet[2835]: I0513 00:03:44.223855 2835 memory_manager.go:355] "RemoveStaleState removing state" podUID="95574585-119d-4c26-add6-806627db6d54" containerName="cilium-agent" May 13 00:03:44.257568 sshd[4539]: Accepted publickey for core from 147.75.109.163 port 48192 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:44.259357 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:44.266296 systemd-logind[1540]: New session 28 of user core. May 13 00:03:44.273098 systemd[1]: Started session-28.scope - Session 28 of User core. May 13 00:03:44.284248 systemd[1]: Created slice kubepods-burstable-podbce5d829_9bc0_49c3_b88d_ee42eb798d89.slice - libcontainer container kubepods-burstable-podbce5d829_9bc0_49c3_b88d_ee42eb798d89.slice. May 13 00:03:44.327331 sshd[4542]: Connection closed by 147.75.109.163 port 48192 May 13 00:03:44.327728 sshd-session[4539]: pam_unix(sshd:session): session closed for user core May 13 00:03:44.340864 systemd[1]: sshd@25-139.178.70.99:22-147.75.109.163:48192.service: Deactivated successfully. May 13 00:03:44.342191 systemd[1]: session-28.scope: Deactivated successfully. May 13 00:03:44.342739 systemd-logind[1540]: Session 28 logged out. Waiting for processes to exit. May 13 00:03:44.344767 systemd[1]: Started sshd@26-139.178.70.99:22-147.75.109.163:48202.service - OpenSSH per-connection server daemon (147.75.109.163:48202). 
May 13 00:03:44.345646 kubelet[2835]: I0513 00:03:44.345594 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bce5d829-9bc0-49c3-b88d-ee42eb798d89-cilium-ipsec-secrets\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345646 kubelet[2835]: I0513 00:03:44.345623 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-cilium-run\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345646 kubelet[2835]: I0513 00:03:44.345635 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-lib-modules\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345646 kubelet[2835]: I0513 00:03:44.345644 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-xtables-lock\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345755 kubelet[2835]: I0513 00:03:44.345654 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-bpf-maps\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345755 kubelet[2835]: I0513 00:03:44.345663 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bce5d829-9bc0-49c3-b88d-ee42eb798d89-clustermesh-secrets\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345755 kubelet[2835]: I0513 00:03:44.345677 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-host-proc-sys-kernel\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345755 kubelet[2835]: I0513 00:03:44.345690 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bce5d829-9bc0-49c3-b88d-ee42eb798d89-hubble-tls\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345755 kubelet[2835]: I0513 00:03:44.345699 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bce5d829-9bc0-49c3-b88d-ee42eb798d89-cilium-config-path\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.345755 kubelet[2835]: I0513 00:03:44.345724 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-host-proc-sys-net\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.346018 kubelet[2835]: I0513 00:03:44.345734 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-etc-cni-netd\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.346018 kubelet[2835]: I0513 00:03:44.345742 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-hostproc\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.346018 kubelet[2835]: I0513 00:03:44.345753 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-cilium-cgroup\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.346018 kubelet[2835]: I0513 00:03:44.345764 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bce5d829-9bc0-49c3-b88d-ee42eb798d89-cni-path\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.346018 kubelet[2835]: I0513 00:03:44.345784 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnfxm\" (UniqueName: \"kubernetes.io/projected/bce5d829-9bc0-49c3-b88d-ee42eb798d89-kube-api-access-mnfxm\") pod \"cilium-bqkw4\" (UID: \"bce5d829-9bc0-49c3-b88d-ee42eb798d89\") " pod="kube-system/cilium-bqkw4"
May 13 00:03:44.346310 systemd-logind[1540]: Removed session 28.
May 13 00:03:44.381159 sshd[4548]: Accepted publickey for core from 147.75.109.163 port 48202 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:03:44.381902 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:03:44.385064 systemd-logind[1540]: New session 29 of user core.
May 13 00:03:44.388006 systemd[1]: Started session-29.scope - Session 29 of User core.
May 13 00:03:44.588545 containerd[1563]: time="2025-05-13T00:03:44.588460874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bqkw4,Uid:bce5d829-9bc0-49c3-b88d-ee42eb798d89,Namespace:kube-system,Attempt:0,}"
May 13 00:03:44.598362 containerd[1563]: time="2025-05-13T00:03:44.598059366Z" level=info msg="connecting to shim e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529" address="unix:///run/containerd/s/2208b2ad8f343f6523213907f6a31533ac912bb32bc79791d98e9ded689b3aae" namespace=k8s.io protocol=ttrpc version=3
May 13 00:03:44.618006 systemd[1]: Started cri-containerd-e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529.scope - libcontainer container e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529.
May 13 00:03:44.633150 containerd[1563]: time="2025-05-13T00:03:44.633126439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bqkw4,Uid:bce5d829-9bc0-49c3-b88d-ee42eb798d89,Namespace:kube-system,Attempt:0,} returns sandbox id \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\""
May 13 00:03:44.635943 containerd[1563]: time="2025-05-13T00:03:44.635873100Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:03:44.638432 containerd[1563]: time="2025-05-13T00:03:44.638410679Z" level=info msg="Container ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494: CDI devices from CRI Config.CDIDevices: []"
May 13 00:03:44.641615 containerd[1563]: time="2025-05-13T00:03:44.641592697Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494\""
May 13 00:03:44.642182 containerd[1563]: time="2025-05-13T00:03:44.642020206Z" level=info msg="StartContainer for \"ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494\""
May 13 00:03:44.642756 containerd[1563]: time="2025-05-13T00:03:44.642720772Z" level=info msg="connecting to shim ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494" address="unix:///run/containerd/s/2208b2ad8f343f6523213907f6a31533ac912bb32bc79791d98e9ded689b3aae" protocol=ttrpc version=3
May 13 00:03:44.658071 systemd[1]: Started cri-containerd-ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494.scope - libcontainer container ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494.
May 13 00:03:44.675339 containerd[1563]: time="2025-05-13T00:03:44.675271758Z" level=info msg="StartContainer for \"ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494\" returns successfully"
May 13 00:03:44.691452 systemd[1]: cri-containerd-ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494.scope: Deactivated successfully.
May 13 00:03:44.691675 systemd[1]: cri-containerd-ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494.scope: Consumed 14ms CPU time, 9.5M memory peak, 3.1M read from disk.
May 13 00:03:44.693785 containerd[1563]: time="2025-05-13T00:03:44.693680431Z" level=info msg="received exit event container_id:\"ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494\" id:\"ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494\" pid:4621 exited_at:{seconds:1747094624 nanos:693535463}"
May 13 00:03:44.693785 containerd[1563]: time="2025-05-13T00:03:44.693752375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494\" id:\"ebdecc5686fba54c64c017824042e7d5a6ff513dc88059eb5440dd93ab41e494\" pid:4621 exited_at:{seconds:1747094624 nanos:693535463}"
May 13 00:03:44.910716 containerd[1563]: time="2025-05-13T00:03:44.910633916Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:03:44.913902 containerd[1563]: time="2025-05-13T00:03:44.913861166Z" level=info msg="Container 1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73: CDI devices from CRI Config.CDIDevices: []"
May 13 00:03:44.916115 containerd[1563]: time="2025-05-13T00:03:44.916096351Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73\""
May 13 00:03:44.916789 containerd[1563]: time="2025-05-13T00:03:44.916389996Z" level=info msg="StartContainer for \"1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73\""
May 13 00:03:44.916830 containerd[1563]: time="2025-05-13T00:03:44.916810526Z" level=info msg="connecting to shim 1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73" address="unix:///run/containerd/s/2208b2ad8f343f6523213907f6a31533ac912bb32bc79791d98e9ded689b3aae" protocol=ttrpc version=3
May 13 00:03:44.934097 systemd[1]: Started cri-containerd-1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73.scope - libcontainer container 1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73.
May 13 00:03:44.951581 containerd[1563]: time="2025-05-13T00:03:44.951269533Z" level=info msg="StartContainer for \"1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73\" returns successfully"
May 13 00:03:44.963967 systemd[1]: cri-containerd-1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73.scope: Deactivated successfully.
May 13 00:03:44.964203 systemd[1]: cri-containerd-1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73.scope: Consumed 12ms CPU time, 7.4M memory peak, 2M read from disk.
May 13 00:03:44.964377 containerd[1563]: time="2025-05-13T00:03:44.964353967Z" level=info msg="received exit event container_id:\"1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73\" id:\"1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73\" pid:4666 exited_at:{seconds:1747094624 nanos:964240943}"
May 13 00:03:44.964883 containerd[1563]: time="2025-05-13T00:03:44.964840764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73\" id:\"1c31508f5d90f0b5b711e5316ed08f32aa9975fdd090fd5b989f6f1ac9a3ff73\" pid:4666 exited_at:{seconds:1747094624 nanos:964240943}"
May 13 00:03:45.914142 containerd[1563]: time="2025-05-13T00:03:45.914078816Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:03:45.921493 containerd[1563]: time="2025-05-13T00:03:45.921454514Z" level=info msg="Container c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775: CDI devices from CRI Config.CDIDevices: []"
May 13 00:03:45.928585 containerd[1563]: time="2025-05-13T00:03:45.928449197Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775\""
May 13 00:03:45.930020 containerd[1563]: time="2025-05-13T00:03:45.929957264Z" level=info msg="StartContainer for \"c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775\""
May 13 00:03:45.930841 containerd[1563]: time="2025-05-13T00:03:45.930789937Z" level=info msg="connecting to shim c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775" address="unix:///run/containerd/s/2208b2ad8f343f6523213907f6a31533ac912bb32bc79791d98e9ded689b3aae" protocol=ttrpc version=3
May 13 00:03:45.949014 systemd[1]: Started cri-containerd-c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775.scope - libcontainer container c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775.
May 13 00:03:45.973230 containerd[1563]: time="2025-05-13T00:03:45.973196564Z" level=info msg="StartContainer for \"c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775\" returns successfully"
May 13 00:03:45.981293 systemd[1]: cri-containerd-c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775.scope: Deactivated successfully.
May 13 00:03:45.982801 containerd[1563]: time="2025-05-13T00:03:45.982778479Z" level=info msg="received exit event container_id:\"c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775\" id:\"c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775\" pid:4709 exited_at:{seconds:1747094625 nanos:982570657}"
May 13 00:03:45.983839 containerd[1563]: time="2025-05-13T00:03:45.983059728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775\" id:\"c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775\" pid:4709 exited_at:{seconds:1747094625 nanos:982570657}"
May 13 00:03:45.998182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c76e3ce9d0f2bbde9435bcb0e57bc99ac8f373fd6abaf104fe636f1bd7b78775-rootfs.mount: Deactivated successfully.
May 13 00:03:46.012029 kubelet[2835]: I0513 00:03:46.011996 2835 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:03:46Z","lastTransitionTime":"2025-05-13T00:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:03:46.930747 containerd[1563]: time="2025-05-13T00:03:46.930711515Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:03:46.959141 containerd[1563]: time="2025-05-13T00:03:46.959035408Z" level=info msg="Container 77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9: CDI devices from CRI Config.CDIDevices: []"
May 13 00:03:46.985526 containerd[1563]: time="2025-05-13T00:03:46.985494734Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9\""
May 13 00:03:46.986400 containerd[1563]: time="2025-05-13T00:03:46.986375913Z" level=info msg="StartContainer for \"77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9\""
May 13 00:03:46.987187 containerd[1563]: time="2025-05-13T00:03:46.987149492Z" level=info msg="connecting to shim 77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9" address="unix:///run/containerd/s/2208b2ad8f343f6523213907f6a31533ac912bb32bc79791d98e9ded689b3aae" protocol=ttrpc version=3
May 13 00:03:47.004048 systemd[1]: Started cri-containerd-77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9.scope - libcontainer container 77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9.
May 13 00:03:47.020823 systemd[1]: cri-containerd-77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9.scope: Deactivated successfully.
May 13 00:03:47.021150 containerd[1563]: time="2025-05-13T00:03:47.021038159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9\" id:\"77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9\" pid:4747 exited_at:{seconds:1747094627 nanos:20898048}"
May 13 00:03:47.026606 containerd[1563]: time="2025-05-13T00:03:47.026584855Z" level=info msg="received exit event container_id:\"77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9\" id:\"77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9\" pid:4747 exited_at:{seconds:1747094627 nanos:20898048}"
May 13 00:03:47.031096 containerd[1563]: time="2025-05-13T00:03:47.031070869Z" level=info msg="StartContainer for \"77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9\" returns successfully"
May 13 00:03:47.039704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77ec6aa6483e824214350b342c2f4b256d1f7a25906e8aed6c7378c4c6f76fa9-rootfs.mount: Deactivated successfully.
May 13 00:03:47.923860 containerd[1563]: time="2025-05-13T00:03:47.923784271Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:03:47.941143 containerd[1563]: time="2025-05-13T00:03:47.941103766Z" level=info msg="Container 692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7: CDI devices from CRI Config.CDIDevices: []"
May 13 00:03:47.945573 containerd[1563]: time="2025-05-13T00:03:47.945540095Z" level=info msg="CreateContainer within sandbox \"e407a0a3cf9887dbf4b9e3da97da3edcc69352aa07db3a93148e611d273cd529\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\""
May 13 00:03:47.946222 containerd[1563]: time="2025-05-13T00:03:47.945904266Z" level=info msg="StartContainer for \"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\""
May 13 00:03:47.947502 containerd[1563]: time="2025-05-13T00:03:47.947385771Z" level=info msg="connecting to shim 692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7" address="unix:///run/containerd/s/2208b2ad8f343f6523213907f6a31533ac912bb32bc79791d98e9ded689b3aae" protocol=ttrpc version=3
May 13 00:03:47.973064 systemd[1]: Started cri-containerd-692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7.scope - libcontainer container 692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7.
May 13 00:03:47.997558 containerd[1563]: time="2025-05-13T00:03:47.997529661Z" level=info msg="StartContainer for \"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\" returns successfully"
May 13 00:03:48.100820 containerd[1563]: time="2025-05-13T00:03:48.100788958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\" id:\"b6f74eca4c1c8c2ac000fed76251c8d58717a7eefd74a6aab73b817d6cc8a627\" pid:4808 exited_at:{seconds:1747094628 nanos:100481782}"
May 13 00:03:48.656948 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 00:03:50.925393 containerd[1563]: time="2025-05-13T00:03:50.925353757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\" id:\"3ce8cdeeacc6f3b3ec32bba3803f74c677a42827e90e18f396b5904a4842aded\" pid:5247 exit_status:1 exited_at:{seconds:1747094630 nanos:924635469}"
May 13 00:03:51.119006 systemd-networkd[1472]: lxc_health: Link UP
May 13 00:03:51.127170 systemd-networkd[1472]: lxc_health: Gained carrier
May 13 00:03:52.145022 systemd-networkd[1472]: lxc_health: Gained IPv6LL
May 13 00:03:52.600962 kubelet[2835]: I0513 00:03:52.600914 2835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bqkw4" podStartSLOduration=8.600893752 podStartE2EDuration="8.600893752s" podCreationTimestamp="2025-05-13 00:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:03:48.939549774 +0000 UTC m=+125.455432877" watchObservedRunningTime="2025-05-13 00:03:52.600893752 +0000 UTC m=+129.116776854"
May 13 00:03:53.088980 containerd[1563]: time="2025-05-13T00:03:53.088949313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\" id:\"0986f901102a29dd4394c653efb61006e15ea67f0484c58c45bd4d63dbd5ce62\" pid:5384 exited_at:{seconds:1747094633 nanos:88620214}"
May 13 00:03:55.360188 containerd[1563]: time="2025-05-13T00:03:55.360150293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\" id:\"f1986a494ec170b88c9f1fa2f3ba6b2d7bf177e2d0308909c22cdf1dd888b8c1\" pid:5411 exited_at:{seconds:1747094635 nanos:359746741}"
May 13 00:03:57.453060 containerd[1563]: time="2025-05-13T00:03:57.452982332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\" id:\"199225ad4cc8e265b6b48cf86da186baa8fc22003dd77a681fd418cfd922ca28\" pid:5433 exited_at:{seconds:1747094637 nanos:452534214}"
May 13 00:03:59.520166 containerd[1563]: time="2025-05-13T00:03:59.520129456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"692cb2823060c3bb27d7d92c890230b6f31220875b41394b408e89e0853467a7\" id:\"cf5ffe2be10bf43885dc569467eb1911b89a522a78dc17ef3d88cfaad29a1f88\" pid:5455 exited_at:{seconds:1747094639 nanos:519864181}"
May 13 00:03:59.524938 sshd[4551]: Connection closed by 147.75.109.163 port 48202
May 13 00:03:59.528977 sshd-session[4548]: pam_unix(sshd:session): session closed for user core
May 13 00:03:59.531193 systemd-logind[1540]: Session 29 logged out. Waiting for processes to exit.
May 13 00:03:59.531542 systemd[1]: sshd@26-139.178.70.99:22-147.75.109.163:48202.service: Deactivated successfully.
May 13 00:03:59.532715 systemd[1]: session-29.scope: Deactivated successfully.
May 13 00:03:59.533382 systemd-logind[1540]: Removed session 29.