Jul 15 11:46:51.651015 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025 Jul 15 11:46:51.651030 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:46:51.651037 kernel: Disabled fast string operations Jul 15 11:46:51.651041 kernel: BIOS-provided physical RAM map: Jul 15 11:46:51.651045 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 15 11:46:51.651049 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 15 11:46:51.651055 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 15 11:46:51.651059 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 15 11:46:51.651063 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 15 11:46:51.651067 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 15 11:46:51.651071 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 15 11:46:51.651075 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 15 11:46:51.651079 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 15 11:46:51.651083 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 15 11:46:51.651090 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 15 11:46:51.651095 kernel: NX (Execute Disable) protection: active Jul 15 11:46:51.651099 kernel: SMBIOS 2.7 present. Jul 15 11:46:51.651104 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 15 11:46:51.651109 kernel: vmware: hypercall mode: 0x00 Jul 15 11:46:51.651116 kernel: Hypervisor detected: VMware Jul 15 11:46:51.651124 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 15 11:46:51.651131 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 15 11:46:51.651137 kernel: vmware: using clock offset of 10128199563 ns Jul 15 11:46:51.651145 kernel: tsc: Detected 3408.000 MHz processor Jul 15 11:46:51.651152 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 11:46:51.651160 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 11:46:51.651167 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 15 11:46:51.651175 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 11:46:51.651182 kernel: total RAM covered: 3072M Jul 15 11:46:51.651191 kernel: Found optimal setting for mtrr clean up Jul 15 11:46:51.651200 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 15 11:46:51.651208 kernel: Using GB pages for direct mapping Jul 15 11:46:51.651215 kernel: ACPI: Early table checksum verification disabled Jul 15 11:46:51.651220 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 15 11:46:51.651225 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 15 11:46:51.651229 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 15 11:46:51.651234 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 15 11:46:51.651238 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 15 11:46:51.651242 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 15 11:46:51.651249 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 15 11:46:51.651255 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jul 15 11:46:51.651262 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 15 11:46:51.651268 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 15 11:46:51.651273 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 15 11:46:51.651280 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 15 11:46:51.651285 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 15 11:46:51.651290 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 15 11:46:51.651295 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 15 11:46:51.651300 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 15 11:46:51.651305 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 15 11:46:51.651310 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 15 11:46:51.651314 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 15 11:46:51.651319 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 15 11:46:51.651325 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 15 11:46:51.651330 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 15 11:46:51.651335 kernel: system APIC only can use physical flat Jul 15 11:46:51.651342 kernel: Setting APIC routing to physical flat. 
Jul 15 11:46:51.651350 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 15 11:46:51.651355 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 15 11:46:51.651360 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 15 11:46:51.651364 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 15 11:46:51.651369 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 15 11:46:51.651375 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 15 11:46:51.651380 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 15 11:46:51.651385 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 15 11:46:51.651390 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 15 11:46:51.651395 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 15 11:46:51.651400 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 15 11:46:51.651407 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 15 11:46:51.651415 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 15 11:46:51.651422 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 15 11:46:51.651427 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 15 11:46:51.651433 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 15 11:46:51.651438 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 15 11:46:51.651443 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 15 11:46:51.651448 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 15 11:46:51.651452 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 15 11:46:51.651457 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 15 11:46:51.651462 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 15 11:46:51.651467 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 15 11:46:51.651472 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 15 11:46:51.651476 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 15 11:46:51.651482 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 15 11:46:51.651487 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 15 11:46:51.651492 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 15 11:46:51.651497 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 15 11:46:51.651501 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jul 15 11:46:51.651506 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jul 15 11:46:51.651511 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 15 11:46:51.651516 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 15 11:46:51.651521 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jul 15 11:46:51.651525 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jul 15 11:46:51.651531 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jul 15 11:46:51.651536 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jul 15 11:46:51.651541 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jul 15 11:46:51.651546 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jul 15 11:46:51.651551 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jul 15 11:46:51.651555 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jul 15 11:46:51.651561 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jul 15 11:46:51.651565 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jul 15 11:46:51.651570 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jul 15 11:46:51.651575 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jul 15 11:46:51.651581 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jul 15 11:46:51.651588 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jul 15 11:46:51.651596 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jul 15 11:46:51.651603 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jul 15 11:46:51.651608 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jul 15 11:46:51.651613 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jul 15 11:46:51.651617 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jul 15 11:46:51.651622 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jul 15 11:46:51.651627 kernel: SRAT: PXM 0 -> APIC 0x6a 
-> Node 0 Jul 15 11:46:51.651632 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jul 15 11:46:51.651638 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jul 15 11:46:51.651643 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jul 15 11:46:51.651647 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jul 15 11:46:51.651652 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jul 15 11:46:51.651657 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jul 15 11:46:51.651662 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jul 15 11:46:51.651672 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jul 15 11:46:51.651678 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jul 15 11:46:51.651684 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jul 15 11:46:51.651689 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jul 15 11:46:51.651694 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jul 15 11:46:51.651700 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jul 15 11:46:51.651705 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jul 15 11:46:51.651711 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jul 15 11:46:51.651716 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jul 15 11:46:51.651721 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jul 15 11:46:51.651726 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jul 15 11:46:51.651731 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jul 15 11:46:51.651737 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jul 15 11:46:51.651743 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jul 15 11:46:51.651748 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jul 15 11:46:51.651753 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jul 15 11:46:51.651758 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jul 15 11:46:51.651764 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jul 15 11:46:51.651769 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jul 15 11:46:51.651774 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jul 15 11:46:51.651779 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jul 15 11:46:51.651784 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jul 15 11:46:51.651798 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jul 15 11:46:51.651803 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jul 15 11:46:51.651808 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jul 15 11:46:51.651813 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jul 15 11:46:51.651819 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jul 15 11:46:51.651826 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jul 15 11:46:51.651831 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jul 15 11:46:51.651837 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jul 15 11:46:51.651842 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jul 15 11:46:51.651849 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jul 15 11:46:51.651854 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jul 15 11:46:51.651859 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jul 15 11:46:51.651867 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jul 15 11:46:51.651875 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jul 15 11:46:51.651881 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jul 15 11:46:51.651886 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jul 15 11:46:51.651892 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jul 15 11:46:51.651897 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jul 15 11:46:51.651902 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jul 15 11:46:51.651908 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jul 15 11:46:51.651914 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jul 15 11:46:51.651919 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jul 15 11:46:51.651924 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jul 15 11:46:51.651929 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jul 15 11:46:51.651934 kernel: SRAT: PXM 0 -> 
APIC 0xd6 -> Node 0 Jul 15 11:46:51.651940 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jul 15 11:46:51.651945 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jul 15 11:46:51.651950 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jul 15 11:46:51.651955 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jul 15 11:46:51.651962 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jul 15 11:46:51.651967 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jul 15 11:46:51.651972 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jul 15 11:46:51.651977 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jul 15 11:46:51.651982 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jul 15 11:46:51.651988 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jul 15 11:46:51.651993 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jul 15 11:46:51.651998 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jul 15 11:46:51.652003 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jul 15 11:46:51.652009 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jul 15 11:46:51.652015 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jul 15 11:46:51.652020 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jul 15 11:46:51.652025 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jul 15 11:46:51.652031 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jul 15 11:46:51.652036 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jul 15 11:46:51.652041 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jul 15 11:46:51.652046 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 15 11:46:51.652051 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 15 11:46:51.652057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 15 11:46:51.652063 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jul 15 11:46:51.652069 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jul 15 11:46:51.652075 kernel: Zone ranges: Jul 15 11:46:51.652080 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 11:46:51.652086 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 15 11:46:51.652091 kernel: Normal empty Jul 15 11:46:51.652096 kernel: Movable zone start for each node Jul 15 11:46:51.652101 kernel: Early memory node ranges Jul 15 11:46:51.652107 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 15 11:46:51.652112 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 15 11:46:51.652119 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 15 11:46:51.652124 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 15 11:46:51.652130 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 11:46:51.652135 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 15 11:46:51.652141 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 15 11:46:51.652146 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 15 11:46:51.652151 kernel: system APIC only can use physical flat Jul 15 11:46:51.652157 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 15 11:46:51.652163 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 15 11:46:51.652172 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 15 11:46:51.652178 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 15 11:46:51.652183 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 15 11:46:51.652188 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 15 11:46:51.652194 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 15 11:46:51.652199 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] 
high edge lint[0x1]) Jul 15 11:46:51.652204 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 15 11:46:51.652214 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 15 11:46:51.652219 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 15 11:46:51.652225 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 15 11:46:51.652235 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 15 11:46:51.652243 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 15 11:46:51.652251 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 15 11:46:51.652259 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 15 11:46:51.652267 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 15 11:46:51.652283 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 15 11:46:51.652293 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 15 11:46:51.652298 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 15 11:46:51.652303 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 15 11:46:51.652312 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 15 11:46:51.652318 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 15 11:46:51.652324 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 15 11:46:51.652329 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 15 11:46:51.652334 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 15 11:46:51.652350 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 15 11:46:51.652360 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 15 11:46:51.652372 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 15 11:46:51.652379 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 15 11:46:51.652384 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 15 11:46:51.652393 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 15 11:46:51.652400 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 15 11:46:51.652408 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 15 11:46:51.652414 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 15 11:46:51.652421 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 15 11:46:51.652427 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 15 11:46:51.652432 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 15 11:46:51.652438 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 15 11:46:51.652443 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 15 11:46:51.652452 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 15 11:46:51.652458 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 15 11:46:51.652466 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 15 11:46:51.652479 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 15 11:46:51.652486 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 15 11:46:51.652491 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 15 11:46:51.652496 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 15 11:46:51.652502 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 15 11:46:51.652507 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 15 11:46:51.652514 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 15 11:46:51.652519 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x32] high edge lint[0x1]) Jul 15 11:46:51.652525 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 15 11:46:51.652530 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 15 11:46:51.652536 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 15 11:46:51.652541 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 15 11:46:51.652546 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 15 11:46:51.652551 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 15 11:46:51.652557 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 15 11:46:51.652562 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 15 11:46:51.652568 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 15 11:46:51.652574 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 15 11:46:51.652579 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 15 11:46:51.652584 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 15 11:46:51.652589 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 15 11:46:51.652594 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 15 11:46:51.652600 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 15 11:46:51.652605 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 15 11:46:51.652611 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 15 11:46:51.652620 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 15 11:46:51.652628 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 15 11:46:51.652636 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 15 11:46:51.652644 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 15 11:46:51.652651 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 15 11:46:51.652660 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 15 11:46:51.652668 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 15 11:46:51.652676 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 15 11:46:51.652684 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 15 11:46:51.652694 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 15 11:46:51.652705 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 15 11:46:51.652713 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 15 11:46:51.652721 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 15 11:46:51.652728 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 15 11:46:51.652736 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 15 11:46:51.652743 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 15 11:46:51.652751 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 15 11:46:51.652759 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 15 11:46:51.652766 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 15 11:46:51.652776 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 15 11:46:51.652785 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 15 11:46:51.652864 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 15 11:46:51.652870 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 15 11:46:51.652875 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 15 11:46:51.652880 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 15 11:46:51.652886 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 15 11:46:51.652891 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 15 11:46:51.652896 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 15 11:46:51.652902 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 15 11:46:51.652908 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 15 11:46:51.652914 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 15 11:46:51.652919 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 15 11:46:51.652924 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 15 11:46:51.652930 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 15 11:46:51.652935 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 15 11:46:51.652940 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 15 11:46:51.652945 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 15 11:46:51.652950 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 15 11:46:51.652957 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 15 11:46:51.652962 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 15 11:46:51.652967 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 15 11:46:51.652973 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 15 11:46:51.652978 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 15 11:46:51.652985 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 15 11:46:51.652993 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 15 11:46:51.653000 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 15 11:46:51.653007 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 15 11:46:51.653014 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 15 11:46:51.653023 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 15 11:46:51.653030 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 15 11:46:51.653038 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 15 11:46:51.653045 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 15 11:46:51.653051 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 15 11:46:51.653056 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 15 11:46:51.653061 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 15 11:46:51.653067 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 15 11:46:51.653072 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 15 11:46:51.653079 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 15 11:46:51.653084 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 15 11:46:51.653089 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 15 11:46:51.653094 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 15 11:46:51.653100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 15 11:46:51.653105 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 11:46:51.653110 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 15 11:46:51.653117 kernel: TSC deadline timer available Jul 15 11:46:51.653126 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jul 15 11:46:51.653135 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 15 11:46:51.653143 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 15 11:46:51.653151 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 11:46:51.653159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Jul 15 11:46:51.653167 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Jul 15 11:46:51.653176 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Jul 15 11:46:51.653184 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 15 11:46:51.653189 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 15 11:46:51.653195 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 15 11:46:51.653204 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 15 11:46:51.653212 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 15 11:46:51.653219 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 15 11:46:51.653224 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 15 11:46:51.653237 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 15 11:46:51.653244 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 15 11:46:51.653249 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 15 11:46:51.653255 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 15 11:46:51.653260 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 15 11:46:51.653268 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 15 11:46:51.653273 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 15 11:46:51.653279 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 15 11:46:51.653285 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 15 11:46:51.653290 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jul 15 11:46:51.653296 kernel: Policy zone: DMA32 Jul 15 11:46:51.653303 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:46:51.653309 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 15 11:46:51.653320 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 15 11:46:51.653334 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 15 11:46:51.653343 kernel: printk: log_buf_len min size: 262144 bytes Jul 15 11:46:51.653353 kernel: printk: log_buf_len: 1048576 bytes Jul 15 11:46:51.653359 kernel: printk: early log buf free: 239728(91%) Jul 15 11:46:51.653365 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 11:46:51.653371 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 15 11:46:51.653377 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 11:46:51.653383 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 155976K reserved, 0K cma-reserved) Jul 15 11:46:51.653391 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 15 11:46:51.653396 kernel: ftrace: allocating 34607 entries in 136 pages Jul 15 11:46:51.653402 kernel: ftrace: allocated 136 pages with 2 groups Jul 15 11:46:51.653409 kernel: rcu: Hierarchical RCU implementation. Jul 15 11:46:51.653415 kernel: rcu: RCU event tracing is enabled. Jul 15 11:46:51.653422 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 15 11:46:51.653428 kernel: Rude variant of Tasks RCU enabled. Jul 15 11:46:51.653434 kernel: Tracing variant of Tasks RCU enabled. Jul 15 11:46:51.653439 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 11:46:51.653445 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 15 11:46:51.653451 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 15 11:46:51.653457 kernel: random: crng init done Jul 15 11:46:51.653462 kernel: Console: colour VGA+ 80x25 Jul 15 11:46:51.653468 kernel: printk: console [tty0] enabled Jul 15 11:46:51.653474 kernel: printk: console [ttyS0] enabled Jul 15 11:46:51.653481 kernel: ACPI: Core revision 20210730 Jul 15 11:46:51.653486 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 15 11:46:51.653492 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 11:46:51.653498 kernel: x2apic enabled Jul 15 11:46:51.653504 kernel: Switched APIC routing to physical x2apic. Jul 15 11:46:51.653510 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 15 11:46:51.653516 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 15 11:46:51.653522 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Jul 15 11:46:51.653529 kernel: Disabled fast string operations Jul 15 11:46:51.653538 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 15 11:46:51.653548 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 15 11:46:51.653557 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 11:46:51.653567 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 15 11:46:51.653577 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 15 11:46:51.653587 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 15 11:46:51.653592 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 15 11:46:51.653598 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 15 11:46:51.653607 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 15 11:46:51.653613 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 15 11:46:51.653619 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 15 11:46:51.653625 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 15 11:46:51.653630 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 15 11:46:51.653636 kernel: GDS: Unknown: Dependent on hypervisor status Jul 15 11:46:51.653642 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 15 11:46:51.653647 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 11:46:51.653653 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 11:46:51.653660 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 11:46:51.653665 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 11:46:51.653671 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 15 11:46:51.653677 kernel: Freeing SMP alternatives memory: 32K Jul 15 11:46:51.653683 kernel: pid_max: default: 131072 minimum: 1024 Jul 15 11:46:51.653688 kernel: LSM: Security Framework initializing Jul 15 11:46:51.653694 kernel: SELinux: Initializing. Jul 15 11:46:51.653700 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 15 11:46:51.653706 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 15 11:46:51.653714 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 15 11:46:51.653720 kernel: Performance Events: Skylake events, core PMU driver. Jul 15 11:46:51.653725 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 15 11:46:51.653731 kernel: core: CPUID marked event: 'instructions' unavailable Jul 15 11:46:51.653737 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 15 11:46:51.653742 kernel: core: CPUID marked event: 'cache references' unavailable Jul 15 11:46:51.653748 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 15 11:46:51.653753 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 15 11:46:51.653761 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 15 11:46:51.653766 kernel: ... version: 1 Jul 15 11:46:51.653772 kernel: ... bit width: 48 Jul 15 11:46:51.653778 kernel: ... generic registers: 4 Jul 15 11:46:51.653784 kernel: ... value mask: 0000ffffffffffff Jul 15 11:46:51.653797 kernel: ... max period: 000000007fffffff Jul 15 11:46:51.653803 kernel: ... fixed-purpose events: 0 Jul 15 11:46:51.653808 kernel: ... event mask: 000000000000000f Jul 15 11:46:51.653814 kernel: signal: max sigframe size: 1776 Jul 15 11:46:51.653820 kernel: rcu: Hierarchical SRCU implementation. Jul 15 11:46:51.653827 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 15 11:46:51.653833 kernel: smp: Bringing up secondary CPUs ... Jul 15 11:46:51.653838 kernel: x86: Booting SMP configuration: Jul 15 11:46:51.653844 kernel: .... 
node #0, CPUs: #1 Jul 15 11:46:51.653850 kernel: Disabled fast string operations Jul 15 11:46:51.653856 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 15 11:46:51.653861 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 15 11:46:51.653867 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 11:46:51.653873 kernel: smpboot: Max logical packages: 128 Jul 15 11:46:51.653878 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 15 11:46:51.653885 kernel: devtmpfs: initialized Jul 15 11:46:51.653891 kernel: x86/mm: Memory block size: 128MB Jul 15 11:46:51.653896 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 15 11:46:51.653902 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 11:46:51.653908 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 15 11:46:51.653914 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 11:46:51.653920 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 11:46:51.653925 kernel: audit: initializing netlink subsys (disabled) Jul 15 11:46:51.653932 kernel: audit: type=2000 audit(1752580010.086:1): state=initialized audit_enabled=0 res=1 Jul 15 11:46:51.653938 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 11:46:51.653944 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 11:46:51.653949 kernel: cpuidle: using governor menu Jul 15 11:46:51.653955 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 15 11:46:51.653960 kernel: ACPI: bus type PCI registered Jul 15 11:46:51.653966 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 11:46:51.653972 kernel: dca service started, version 1.12.1 Jul 15 11:46:51.653978 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 15 11:46:51.653983 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Jul 15 11:46:51.653992 kernel: PCI: Using configuration type 1 for base access Jul 15 11:46:51.654000 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 11:46:51.654009 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 11:46:51.654018 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 11:46:51.654027 kernel: ACPI: Added _OSI(Module Device) Jul 15 11:46:51.654036 kernel: ACPI: Added _OSI(Processor Device) Jul 15 11:46:51.654045 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 11:46:51.654054 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 15 11:46:51.654068 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 15 11:46:51.654077 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 15 11:46:51.654083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 11:46:51.654089 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 15 11:46:51.654094 kernel: ACPI: Interpreter enabled Jul 15 11:46:51.654101 kernel: ACPI: PM: (supports S0 S1 S5) Jul 15 11:46:51.654107 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 11:46:51.654113 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 11:46:51.654119 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 15 11:46:51.654126 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 15 11:46:51.654247 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 11:46:51.654856 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 15 11:46:51.654918 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 15 11:46:51.654932 kernel: PCI host bridge to bus 0000:00 Jul 15 11:46:51.654988 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 11:46:51.655034 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 15 11:46:51.655080 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 15 11:46:51.655123 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 11:46:51.655165 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 15 11:46:51.655208 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 15 11:46:51.655269 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 15 11:46:51.655323 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 15 11:46:51.655382 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 15 11:46:51.655437 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 15 11:46:51.655485 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 15 11:46:51.655533 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 15 11:46:51.655580 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 15 11:46:51.655629 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 15 11:46:51.655677 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 15 11:46:51.655734 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 15 11:46:51.655782 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 15 11:46:51.655850 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 15 11:46:51.655905 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 15 11:46:51.655954 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 15 11:46:51.656001 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 15 11:46:51.656056 kernel: pci 0000:00:0f.0: 
[15ad:0405] type 00 class 0x030000 Jul 15 11:46:51.656105 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 15 11:46:51.656160 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 15 11:46:51.656247 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 15 11:46:51.656318 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 15 11:46:51.656394 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 15 11:46:51.656475 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 15 11:46:51.656546 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.656598 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.656650 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.656701 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.656756 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.656826 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.656883 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.656933 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.656985 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.657034 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.657086 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.657134 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.657189 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.657240 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.657298 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.657347 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.657398 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.657446 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.657500 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.657547 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.657598 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.657645 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.657699 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.657747 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.662240 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.662312 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.662370 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.662420 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.662473 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.662523 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.662581 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.662630 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.662683 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.662731 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.662784 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 15 
11:46:51.662841 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.662895 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.662945 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663005 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663054 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663109 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663158 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663210 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663261 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663311 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663359 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663410 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663459 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663510 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663561 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663613 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663660 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663711 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663759 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663828 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663881 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.663935 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.663984 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.664037 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.664086 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.664137 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.664188 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.664240 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 15 11:46:51.664288 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.664342 kernel: pci_bus 0000:01: extended config space not accessible Jul 15 11:46:51.664393 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 15 11:46:51.664444 kernel: pci_bus 0000:02: extended config space not accessible Jul 15 11:46:51.664453 kernel: acpiphp: Slot [32] registered Jul 15 11:46:51.664461 kernel: acpiphp: Slot [33] registered Jul 15 11:46:51.664467 kernel: acpiphp: Slot [34] registered Jul 15 11:46:51.664472 kernel: acpiphp: Slot [35] registered Jul 15 11:46:51.664479 kernel: acpiphp: Slot [36] registered Jul 15 11:46:51.664484 kernel: acpiphp: Slot [37] registered Jul 15 11:46:51.664490 kernel: acpiphp: Slot [38] registered Jul 15 11:46:51.664496 kernel: acpiphp: Slot [39] registered Jul 15 11:46:51.664501 kernel: acpiphp: Slot [40] registered Jul 15 11:46:51.664507 kernel: acpiphp: Slot [41] registered Jul 15 11:46:51.664514 kernel: acpiphp: Slot [42] registered Jul 15 11:46:51.664520 kernel: acpiphp: Slot [43] registered Jul 15 11:46:51.664525 kernel: acpiphp: Slot [44] registered Jul 15 11:46:51.664531 kernel: acpiphp: Slot [45] registered Jul 15 
11:46:51.664537 kernel: acpiphp: Slot [46] registered Jul 15 11:46:51.664542 kernel: acpiphp: Slot [47] registered Jul 15 11:46:51.664548 kernel: acpiphp: Slot [48] registered Jul 15 11:46:51.664554 kernel: acpiphp: Slot [49] registered Jul 15 11:46:51.664559 kernel: acpiphp: Slot [50] registered Jul 15 11:46:51.664565 kernel: acpiphp: Slot [51] registered Jul 15 11:46:51.664572 kernel: acpiphp: Slot [52] registered Jul 15 11:46:51.664578 kernel: acpiphp: Slot [53] registered Jul 15 11:46:51.664583 kernel: acpiphp: Slot [54] registered Jul 15 11:46:51.664589 kernel: acpiphp: Slot [55] registered Jul 15 11:46:51.664595 kernel: acpiphp: Slot [56] registered Jul 15 11:46:51.664600 kernel: acpiphp: Slot [57] registered Jul 15 11:46:51.664606 kernel: acpiphp: Slot [58] registered Jul 15 11:46:51.664612 kernel: acpiphp: Slot [59] registered Jul 15 11:46:51.664617 kernel: acpiphp: Slot [60] registered Jul 15 11:46:51.664624 kernel: acpiphp: Slot [61] registered Jul 15 11:46:51.664630 kernel: acpiphp: Slot [62] registered Jul 15 11:46:51.664635 kernel: acpiphp: Slot [63] registered Jul 15 11:46:51.664683 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 15 11:46:51.664755 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 15 11:46:51.664853 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 15 11:46:51.664902 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 15 11:46:51.664951 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 15 11:46:51.665001 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 15 11:46:51.665120 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 15 11:46:51.665173 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 15 11:46:51.665221 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 15 11:46:51.665276 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 15 11:46:51.665364 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 15 11:46:51.665418 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 15 11:46:51.665472 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 15 11:46:51.665521 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 15 11:46:51.665571 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 15 11:46:51.665621 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 15 11:46:51.665670 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 15 11:46:51.665718 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 15 11:46:51.665768 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 15 11:46:51.665822 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 15 11:46:51.665871 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 15 11:46:51.665920 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 15 11:46:51.665969 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 15 11:46:51.666016 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 15 11:46:51.666063 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 15 11:46:51.666120 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 15 11:46:51.666169 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 15 11:46:51.666230 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 15 11:46:51.666280 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 15 11:46:51.666330 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 15 11:46:51.666379 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 15 11:46:51.666426 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 15 11:46:51.666500 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 15 11:46:51.666550 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 15 11:46:51.666597 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 15 11:46:51.666647 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 15 11:46:51.666695 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 15 11:46:51.666743 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 15 11:46:51.666854 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 15 11:46:51.666908 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 15 11:46:51.666959 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 15 11:46:51.667013 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 15 11:46:51.667064 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 15 11:46:51.667113 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 15 11:46:51.667162 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 15 11:46:51.667210 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 15 11:46:51.667258 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 15 11:46:51.667310 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 15 11:46:51.667359 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 15 11:46:51.667407 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 15 11:46:51.667456 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 15 11:46:51.667503 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 15 11:46:51.667550 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 15 11:46:51.667598 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 15 11:46:51.667645 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 15 11:46:51.667694 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 15 11:46:51.667742 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 15 11:46:51.667797 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 15 11:46:51.670493 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 15 11:46:51.670550 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 15 11:46:51.670601 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 15 11:46:51.670653 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 15 11:46:51.670706 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 15 11:46:51.670755 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 15 11:46:51.670819 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 15 11:46:51.670876 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 15 11:46:51.670942 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 15 11:46:51.671008 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 15 11:46:51.671065 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 15 11:46:51.671113 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 15 11:46:51.671164 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 15 11:46:51.671212 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 15 11:46:51.671259 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 15 11:46:51.671307 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 15 11:46:51.671354 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 15 11:46:51.671401 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 15 11:46:51.671449 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 15 11:46:51.671497 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 15 11:46:51.671544 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 15 11:46:51.671594 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 15 11:46:51.671642 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 15 11:46:51.671689 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 15 11:46:51.671738 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 15 11:46:51.671784 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 15 11:46:51.672866 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 15 11:46:51.672920 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 15 11:46:51.672973 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 15 11:46:51.673022 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 15 11:46:51.673073 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 15 11:46:51.673122 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 15 11:46:51.673170 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 15 11:46:51.673223 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 15 11:46:51.673272 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 15 11:46:51.673321 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 15 11:46:51.673372 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 15 11:46:51.673422 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 15 11:46:51.673469 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 15 11:46:51.673517 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 15 11:46:51.673565 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 15 11:46:51.673612 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 15 11:46:51.673660 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 15 11:46:51.673716 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 15 11:46:51.673767 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 15 11:46:51.675980 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 15 11:46:51.676039 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 15 11:46:51.676091 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 15 11:46:51.676141 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 15 11:46:51.676192 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 15 11:46:51.676246 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 15 11:46:51.676295 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 15 11:46:51.676346 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 15 11:46:51.676395 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 15 11:46:51.676443 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 15 11:46:51.676489 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 15 11:46:51.676538 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 15 11:46:51.676586 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 15 11:46:51.676634 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 15 11:46:51.676685 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 15 11:46:51.676734 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 15 11:46:51.676780 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 15 11:46:51.676846 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 15 11:46:51.676895 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 15 11:46:51.676941 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 15 11:46:51.676992 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 15 11:46:51.677039 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 15 11:46:51.677089 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 15 11:46:51.677137 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 15 11:46:51.677184 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 15 11:46:51.677231 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 15 11:46:51.677239 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 15 11:46:51.677246 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jul 15 11:46:51.677252 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 15 11:46:51.677258 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 15 11:46:51.677264 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 15 11:46:51.677271 kernel: iommu: Default domain type: Translated Jul 15 11:46:51.677277 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 11:46:51.677324 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 15 11:46:51.677371 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 15 11:46:51.677418 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 15 11:46:51.677427 kernel: vgaarb: loaded Jul 15 11:46:51.677433 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 15 11:46:51.677439 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 15 11:46:51.677445 kernel: PTP clock support registered Jul 15 11:46:51.677452 kernel: PCI: Using ACPI for IRQ routing Jul 15 11:46:51.677458 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 11:46:51.677464 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 15 11:46:51.677469 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 15 11:46:51.677475 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 15 11:46:51.677481 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 15 11:46:51.677486 kernel: clocksource: Switched to clocksource tsc-early Jul 15 11:46:51.677492 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 11:46:51.677498 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 11:46:51.677505 kernel: pnp: PnP ACPI init Jul 15 11:46:51.677557 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 15 11:46:51.677603 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 15 11:46:51.677645 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 15 11:46:51.677693 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 15 11:46:51.677739 kernel: pnp 00:06: [dma 2] Jul 15 11:46:51.677797 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 15 11:46:51.677852 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 15 11:46:51.677896 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 15 11:46:51.677904 kernel: pnp: PnP ACPI: found 8 devices Jul 15 11:46:51.677910 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 11:46:51.677916 kernel: NET: Registered PF_INET protocol family Jul 15 11:46:51.677922 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 11:46:51.677928 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 15 11:46:51.677936 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 11:46:51.677941 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 11:46:51.677947 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 15 11:46:51.677953 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 15 11:46:51.677959 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 15 11:46:51.677965 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 15 11:46:51.677971 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 
11:46:51.677976 kernel: NET: Registered PF_XDP protocol family Jul 15 11:46:51.678026 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 15 11:46:51.678079 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 15 11:46:51.678128 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 15 11:46:51.678177 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 15 11:46:51.678226 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 15 11:46:51.678274 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 15 11:46:51.678322 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 15 11:46:51.678372 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 15 11:46:51.678420 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 15 11:46:51.678467 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 15 11:46:51.678515 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 15 11:46:51.678564 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 15 11:46:51.678612 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 15 11:46:51.678664 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 15 11:46:51.678712 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 15 11:46:51.678760 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 15 11:46:51.678821 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 15 11:46:51.678870 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 15 11:46:51.678922 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 15 11:46:51.678970 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 15 11:46:51.679018 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 15 11:46:51.679066 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 15 11:46:51.679114 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 15 11:46:51.679162 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 15 11:46:51.679217 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 15 11:46:51.679267 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.679314 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.679362 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.679409 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.679458 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.679505 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.679553 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.679603 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 15 
11:46:51.679650 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.679697 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.680102 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.680159 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.680209 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.680258 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.680306 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.680356 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.680403 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.680450 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.680497 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.680544 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.680592 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.680639 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.680686 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.680736 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.680784 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681193 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.681245 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681295 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.681349 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681397 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.681444 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681494 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.681542 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681589 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.681636 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681683 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.681730 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681796 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.681845 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681892 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.681942 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.681989 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.682036 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.682082 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.682130 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.682176 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.682228 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.682275 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] 
Jul 15 11:46:51.682322 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.682372 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.682419 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.682465 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.682512 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.682558 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.682605 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.682651 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.682699 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.682745 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.687840 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.687902 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.687954 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688003 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688052 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688098 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688146 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688193 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688240 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688286 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688336 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688383 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688429 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688475 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688522 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688569 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688616 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688662 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688709 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688758 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688818 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688865 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.688912 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.688958 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.689005 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 15 11:46:51.689052 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 15 11:46:51.689100 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 15 11:46:51.689149 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 15 11:46:51.689199 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 15 11:46:51.689250 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 15 11:46:51.689297 kernel: 
pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 15 11:46:51.689348 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 15 11:46:51.689396 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 15 11:46:51.689445 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 15 11:46:51.689493 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 15 11:46:51.689541 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 15 11:46:51.689592 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 15 11:46:51.689639 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 15 11:46:51.689686 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 15 11:46:51.689733 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 15 11:46:51.689781 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 15 11:46:51.689843 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 15 11:46:51.689891 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 15 11:46:51.689938 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 15 11:46:51.689985 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 15 11:46:51.690031 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 15 11:46:51.690081 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 15 11:46:51.690127 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 15 11:46:51.690175 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 15 11:46:51.690222 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 15 11:46:51.690272 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 15 11:46:51.690319 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 15 11:46:51.690369 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 15 11:46:51.690416 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 15 11:46:51.690464 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 15 11:46:51.690511 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 15 11:46:51.690558 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 15 11:46:51.690605 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 15 11:46:51.690652 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 15 11:46:51.690703 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 15 11:46:51.690750 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 15 11:46:51.690811 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 15 11:46:51.690861 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 15 11:46:51.690908 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 15 11:46:51.690956 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 15 11:46:51.691003 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 15 11:46:51.691050 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 15 11:46:51.691098 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 15 11:46:51.691145 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 15 11:46:51.691193 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 15 11:46:51.691243 kernel: pci 0000:00:16.2: bridge window [mem 
0xfcc00000-0xfccfffff] Jul 15 11:46:51.691291 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 15 11:46:51.691337 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 15 11:46:51.691385 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 15 11:46:51.691431 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 15 11:46:51.691479 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 15 11:46:51.691526 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 15 11:46:51.691573 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 15 11:46:51.691619 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 15 11:46:51.691666 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 15 11:46:51.691714 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 15 11:46:51.691761 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 15 11:46:51.691814 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 15 11:46:51.691862 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 15 11:46:51.691909 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 15 11:46:51.691955 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 15 11:46:51.692003 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 15 11:46:51.692051 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 15 11:46:51.692099 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 15 11:46:51.692148 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 15 11:46:51.692195 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 15 11:46:51.692247 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 15 11:46:51.692295 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 15 11:46:51.692342 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 15 11:46:51.692389 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 15 11:46:51.692437 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 15 11:46:51.692483 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 15 11:46:51.692530 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 15 11:46:51.692577 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 15 11:46:51.692626 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 15 11:46:51.692673 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 15 11:46:51.692720 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 15 11:46:51.692767 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 15 11:46:51.692827 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 15 11:46:51.692875 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 15 11:46:51.692923 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 15 11:46:51.692970 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 15 11:46:51.693017 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 15 11:46:51.693368 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 15 11:46:51.693423 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 15 11:46:51.693473 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 
15 11:46:51.693523 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 15 11:46:51.693570 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 15 11:46:51.693640 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 15 11:46:51.693972 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 15 11:46:51.694026 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 15 11:46:51.694077 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 15 11:46:51.694126 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 15 11:46:51.694179 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 15 11:46:51.694248 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 15 11:46:51.699323 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 15 11:46:51.699381 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 15 11:46:51.699434 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 15 11:46:51.699483 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 15 11:46:51.699532 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 15 11:46:51.699582 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 15 11:46:51.699630 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 15 11:46:51.699681 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 15 11:46:51.699729 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 15 11:46:51.699777 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 15 11:46:51.699833 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 15 11:46:51.699882 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 15 11:46:51.699929 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 15 11:46:51.699976 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 15 11:46:51.700024 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 15 11:46:51.700071 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 15 11:46:51.700118 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 15 11:46:51.700169 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 15 11:46:51.700397 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 15 11:46:51.700451 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 15 11:46:51.700500 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 15 11:46:51.700545 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 15 11:46:51.700588 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 15 11:46:51.700630 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 15 11:46:51.700671 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 15 11:46:51.700720 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 15 11:46:51.700765 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 15 11:46:51.701047 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 15 11:46:51.701101 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 15 11:46:51.701151 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 15 11:46:51.701223 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 15 
11:46:51.701624 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 15 11:46:51.701678 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 15 11:46:51.701730 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 15 11:46:51.701777 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 15 11:46:51.701838 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 15 11:46:51.701892 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 15 11:46:51.701938 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 15 11:46:51.701982 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 15 11:46:51.702034 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 15 11:46:51.702077 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 15 11:46:51.702122 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 15 11:46:51.702170 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 15 11:46:51.702215 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 15 11:46:51.702264 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 15 11:46:51.702309 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 15 11:46:51.702359 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 15 11:46:51.702404 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 15 11:46:51.702451 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 15 11:46:51.702496 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 15 11:46:51.702545 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 15 11:46:51.702589 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 15 11:46:51.702642 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 15 11:46:51.702688 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 15 11:46:51.702732 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 15 11:46:51.702780 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 15 11:46:51.702838 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 15 11:46:51.702886 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 15 11:46:51.702943 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 15 11:46:51.702990 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 15 11:46:51.703035 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 15 11:46:51.703308 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 15 11:46:51.703365 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 15 11:46:51.703419 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 15 11:46:51.703469 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 15 11:46:51.703524 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 15 11:46:51.703569 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 15 11:46:51.703621 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 15 11:46:51.703666 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 15 11:46:51.703714 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 15 11:46:51.703762 kernel: pci_bus 0000:12: resource 2 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 15 11:46:51.703827 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 15 11:46:51.703875 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 15 11:46:51.703919 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 15 11:46:51.703968 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 15 11:46:51.704013 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 15 11:46:51.704057 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 15 11:46:51.704108 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 15 11:46:51.704155 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 15 11:46:51.704199 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 15 11:46:51.704249 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 15 11:46:51.704294 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 15 11:46:51.704344 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 15 11:46:51.704390 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 15 11:46:51.704441 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 15 11:46:51.704486 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 15 11:46:51.704534 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 15 11:46:51.704579 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 15 11:46:51.704628 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 15 11:46:51.704673 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 15 11:46:51.704723 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 15 11:46:51.704768 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 15 11:46:51.704821 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 15 11:46:51.704869 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 15 11:46:51.704914 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 15 11:46:51.704959 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 15 11:46:51.705010 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 15 11:46:51.705057 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 15 11:46:51.705108 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 15 11:46:51.705154 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 15 11:46:51.705201 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 15 11:46:51.705247 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 15 11:46:51.705299 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 15 11:46:51.705344 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 15 11:46:51.705393 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 15 11:46:51.705438 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 15 11:46:51.705485 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 15 11:46:51.705531 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 15 11:46:51.705586 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 15 11:46:51.705595 kernel: PCI: CLS 32 bytes, default 64 Jul 15 
11:46:51.705601 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 15 11:46:51.705608 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 15 11:46:51.705614 kernel: clocksource: Switched to clocksource tsc Jul 15 11:46:51.705620 kernel: Initialise system trusted keyrings Jul 15 11:46:51.705627 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 15 11:46:51.705633 kernel: Key type asymmetric registered Jul 15 11:46:51.705639 kernel: Asymmetric key parser 'x509' registered Jul 15 11:46:51.705647 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 15 11:46:51.705653 kernel: io scheduler mq-deadline registered Jul 15 11:46:51.705659 kernel: io scheduler kyber registered Jul 15 11:46:51.705665 kernel: io scheduler bfq registered Jul 15 11:46:51.705716 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 15 11:46:51.705767 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.705825 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 15 11:46:51.705874 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.705924 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 15 11:46:51.705974 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.706023 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 15 11:46:51.706073 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.706123 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 15 11:46:51.706172 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.706230 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 15 11:46:51.706296 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.706386 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 15 11:46:51.706450 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.706504 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 15 11:46:51.706555 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.706603 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 15 11:46:51.706654 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.706702 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 15 11:46:51.706751 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.707018 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 15 11:46:51.707075 kernel: pcieport 0000:00:16.2: pciehp: 
Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.707128 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 15 11:46:51.707178 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.707228 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 15 11:46:51.707598 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.707884 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 15 11:46:51.707948 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.708003 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 15 11:46:51.708054 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.708108 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 15 11:46:51.708157 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.708207 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 15 11:46:51.708277 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.708326 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 15 11:46:51.708375 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.708424 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 15 11:46:51.708490 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.708547 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 15 11:46:51.708596 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.708648 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 15 11:46:51.708697 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.708746 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 15 11:46:51.709151 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.709215 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 15 11:46:51.709270 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.709320 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 15 11:46:51.709592 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.709654 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 15 11:46:51.709705 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.709756 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 15 11:46:51.709819 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.709873 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 15 11:46:51.709922 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.709971 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 15 11:46:51.710019 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.710066 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 15 11:46:51.710117 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.710165 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 15 11:46:51.710213 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.710261 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 15 11:46:51.710309 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.710358 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 15 11:46:51.710407 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 15 11:46:51.710416 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 11:46:51.710422 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 11:46:51.710429 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 11:46:51.710435 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 15 11:46:51.710441 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 11:46:51.710448 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 11:46:51.710501 kernel: rtc_cmos 00:01: registered as rtc0 Jul 15 11:46:51.710549 kernel: rtc_cmos 00:01: setting system clock to 2025-07-15T11:46:51 UTC (1752580011) Jul 15 11:46:51.710593 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 15 11:46:51.710602 kernel: intel_pstate: CPU model not supported Jul 15 11:46:51.710608 kernel: NET: Registered PF_INET6 protocol family Jul 15 11:46:51.710614 kernel: Segment Routing with IPv6 Jul 15 11:46:51.710620 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 11:46:51.710626 kernel: NET: Registered PF_PACKET protocol family Jul 15 11:46:51.710634 kernel: Key type dns_resolver registered Jul 15 11:46:51.710640 kernel: IPI shorthand broadcast: enabled Jul 15 11:46:51.710647 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 15 11:46:51.710653 kernel: sched_clock: Marking stable (882692421, 226272537)->(1175966961, -67002003) Jul 15 11:46:51.710659 kernel: registered taskstats version 1 Jul 15 11:46:51.710665 kernel: Loading compiled-in X.509 certificates Jul 15 11:46:51.710671 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key 
for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289' Jul 15 11:46:51.710677 kernel: Key type .fscrypt registered Jul 15 11:46:51.710683 kernel: Key type fscrypt-provisioning registered Jul 15 11:46:51.710691 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 11:46:51.710697 kernel: ima: Allocated hash algorithm: sha1 Jul 15 11:46:51.710703 kernel: ima: No architecture policies found Jul 15 11:46:51.710709 kernel: clk: Disabling unused clocks Jul 15 11:46:51.710715 kernel: Freeing unused kernel image (initmem) memory: 47476K Jul 15 11:46:51.710721 kernel: Write protecting the kernel read-only data: 28672k Jul 15 11:46:51.710728 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 15 11:46:51.710734 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Jul 15 11:46:51.710741 kernel: Run /init as init process Jul 15 11:46:51.710748 kernel: with arguments: Jul 15 11:46:51.710754 kernel: /init Jul 15 11:46:51.710760 kernel: with environment: Jul 15 11:46:51.710766 kernel: HOME=/ Jul 15 11:46:51.710771 kernel: TERM=linux Jul 15 11:46:51.710777 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 11:46:51.710785 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 15 11:46:51.711046 systemd[1]: Detected virtualization vmware. Jul 15 11:46:51.711056 systemd[1]: Detected architecture x86-64. Jul 15 11:46:51.711063 systemd[1]: Running in initrd. Jul 15 11:46:51.711069 systemd[1]: No hostname configured, using default hostname. Jul 15 11:46:51.711075 systemd[1]: Hostname set to . Jul 15 11:46:51.711082 systemd[1]: Initializing machine ID from random generator. Jul 15 11:46:51.711088 systemd[1]: Queued start job for default target initrd.target. Jul 15 11:46:51.711094 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:46:51.711100 systemd[1]: Reached target cryptsetup.target. Jul 15 11:46:51.711108 systemd[1]: Reached target paths.target. Jul 15 11:46:51.711114 systemd[1]: Reached target slices.target. Jul 15 11:46:51.711120 systemd[1]: Reached target swap.target. Jul 15 11:46:51.711126 systemd[1]: Reached target timers.target. Jul 15 11:46:51.711132 systemd[1]: Listening on iscsid.socket. Jul 15 11:46:51.711139 systemd[1]: Listening on iscsiuio.socket. Jul 15 11:46:51.711145 systemd[1]: Listening on systemd-journald-audit.socket. Jul 15 11:46:51.711152 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 15 11:46:51.711159 systemd[1]: Listening on systemd-journald.socket. Jul 15 11:46:51.711165 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:46:51.711171 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:46:51.711178 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:46:51.711184 systemd[1]: Reached target sockets.target. Jul 15 11:46:51.711190 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:46:51.711196 systemd[1]: Finished network-cleanup.service. Jul 15 11:46:51.711203 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 11:46:51.711210 systemd[1]: Starting systemd-journald.service... Jul 15 11:46:51.711217 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:46:51.711239 systemd[1]: Starting systemd-resolved.service... 
Jul 15 11:46:51.711247 systemd[1]: Starting systemd-vconsole-setup.service... Jul 15 11:46:51.711256 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:46:51.711262 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 11:46:51.711269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:46:51.711275 systemd[1]: Finished systemd-vconsole-setup.service. Jul 15 11:46:51.711491 kernel: audit: type=1130 audit(1752580011.656:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.711503 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 15 11:46:51.711510 kernel: audit: type=1130 audit(1752580011.661:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.711517 systemd[1]: Starting dracut-cmdline-ask.service... Jul 15 11:46:51.711524 systemd[1]: Started systemd-resolved.service. Jul 15 11:46:51.711530 systemd[1]: Reached target nss-lookup.target. Jul 15 11:46:51.711536 kernel: audit: type=1130 audit(1752580011.683:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.711543 systemd[1]: Finished dracut-cmdline-ask.service. Jul 15 11:46:51.711549 systemd[1]: Starting dracut-cmdline.service... Jul 15 11:46:51.711557 kernel: audit: type=1130 audit(1752580011.694:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.711563 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 11:46:51.711570 kernel: Bridge firewalling registered Jul 15 11:46:51.711579 systemd-journald[216]: Journal started Jul 15 11:46:51.711830 systemd-journald[216]: Runtime Journal (/run/log/journal/c188000bc1c44454b8ac5a911be26a67) is 4.8M, max 38.8M, 34.0M free. Jul 15 11:46:51.715269 systemd[1]: Started systemd-journald.service. Jul 15 11:46:51.715285 kernel: audit: type=1130 audit(1752580011.710:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:46:51.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.652496 systemd-modules-load[217]: Inserted module 'overlay' Jul 15 11:46:51.678116 systemd-resolved[218]: Positive Trust Anchors: Jul 15 11:46:51.678121 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:46:51.678142 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:46:51.682822 systemd-resolved[218]: Defaulting to hostname 'linux'. Jul 15 11:46:51.707298 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 15 11:46:51.718657 dracut-cmdline[232]: dracut-dracut-053 Jul 15 11:46:51.718657 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 15 11:46:51.718657 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:46:51.729810 kernel: SCSI subsystem initialized Jul 15 11:46:51.741618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 11:46:51.741655 kernel: device-mapper: uevent: version 1.0.3 Jul 15 11:46:51.741664 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 15 11:46:51.744059 systemd-modules-load[217]: Inserted module 'dm_multipath' Jul 15 11:46:51.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.744652 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:46:51.747926 kernel: audit: type=1130 audit(1752580011.742:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.745182 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:46:51.750970 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:46:51.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.753802 kernel: audit: type=1130 audit(1752580011.749:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.757804 kernel: Loading iSCSI transport class v2.0-870. 
Jul 15 11:46:51.768805 kernel: iscsi: registered transport (tcp) Jul 15 11:46:51.787808 kernel: iscsi: registered transport (qla4xxx) Jul 15 11:46:51.787848 kernel: QLogic iSCSI HBA Driver Jul 15 11:46:51.804795 systemd[1]: Finished dracut-cmdline.service. Jul 15 11:46:51.805415 systemd[1]: Starting dracut-pre-udev.service... Jul 15 11:46:51.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.808806 kernel: audit: type=1130 audit(1752580011.803:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:51.842815 kernel: raid6: avx2x4 gen() 48178 MB/s Jul 15 11:46:51.859814 kernel: raid6: avx2x4 xor() 20589 MB/s Jul 15 11:46:51.876809 kernel: raid6: avx2x2 gen() 49022 MB/s Jul 15 11:46:51.893813 kernel: raid6: avx2x2 xor() 31737 MB/s Jul 15 11:46:51.910813 kernel: raid6: avx2x1 gen() 44835 MB/s Jul 15 11:46:51.927816 kernel: raid6: avx2x1 xor() 27693 MB/s Jul 15 11:46:51.944814 kernel: raid6: sse2x4 gen() 20941 MB/s Jul 15 11:46:51.961809 kernel: raid6: sse2x4 xor() 11481 MB/s Jul 15 11:46:51.978806 kernel: raid6: sse2x2 gen() 21480 MB/s Jul 15 11:46:51.995808 kernel: raid6: sse2x2 xor() 13181 MB/s Jul 15 11:46:52.012805 kernel: raid6: sse2x1 gen() 18106 MB/s Jul 15 11:46:52.030015 kernel: raid6: sse2x1 xor() 8817 MB/s Jul 15 11:46:52.030056 kernel: raid6: using algorithm avx2x2 gen() 49022 MB/s Jul 15 11:46:52.030064 kernel: raid6: .... xor() 31737 MB/s, rmw enabled Jul 15 11:46:52.031205 kernel: raid6: using avx2x2 recovery algorithm Jul 15 11:46:52.039805 kernel: xor: automatically using best checksumming function avx Jul 15 11:46:52.100810 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 15 11:46:52.105415 systemd[1]: Finished dracut-pre-udev.service. Jul 15 11:46:52.106049 systemd[1]: Starting systemd-udevd.service... Jul 15 11:46:52.108803 kernel: audit: type=1130 audit(1752580012.103:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:52.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:52.103000 audit: BPF prog-id=7 op=LOAD Jul 15 11:46:52.104000 audit: BPF prog-id=8 op=LOAD Jul 15 11:46:52.117003 systemd-udevd[415]: Using default interface naming scheme 'v252'. Jul 15 11:46:52.119785 systemd[1]: Started systemd-udevd.service. Jul 15 11:46:52.120306 systemd[1]: Starting dracut-pre-trigger.service... Jul 15 11:46:52.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:52.127578 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jul 15 11:46:52.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:52.144760 systemd[1]: Finished dracut-pre-trigger.service. Jul 15 11:46:52.145321 systemd[1]: Starting systemd-udev-trigger.service... 
Jul 15 11:46:52.208497 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:46:52.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:52.272683 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 15 11:46:52.272717 kernel: vmw_pvscsi: using 64bit dma Jul 15 11:46:52.272725 kernel: vmw_pvscsi: max_id: 16 Jul 15 11:46:52.272733 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 15 11:46:52.275804 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Jul 15 11:46:52.283809 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 15 11:46:52.310260 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 15 11:46:52.310272 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 15 11:46:52.310284 kernel: vmw_pvscsi: using MSI-X Jul 15 11:46:52.310292 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 15 11:46:52.310367 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 15 11:46:52.310448 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 15 11:46:52.310513 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 15 11:46:52.310610 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 11:46:52.310619 kernel: libata version 3.00 loaded. Jul 15 11:46:52.310629 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 15 11:46:52.330094 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 15 11:46:52.330177 kernel: AVX2 version of gcm_enc/dec engaged. Jul 15 11:46:52.330191 kernel: AES CTR mode by8 optimization enabled Jul 15 11:46:52.330203 kernel: scsi host1: ata_piix Jul 15 11:46:52.330280 kernel: scsi host2: ata_piix Jul 15 11:46:52.330351 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 15 11:46:52.330360 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 15 11:46:52.330371 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 15 11:46:52.338375 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 15 11:46:52.338467 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 15 11:46:52.338557 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 15 11:46:52.338649 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 15 11:46:52.338729 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 11:46:52.338738 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 15 11:46:52.499839 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 15 11:46:52.504918 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 15 11:46:52.531844 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 15 11:46:52.548691 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 11:46:52.548702 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (469) Jul 15 11:46:52.548710 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 11:46:52.545726 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 15 11:46:52.547705 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 15 11:46:52.549738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:46:52.551500 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Jul 15 11:46:52.551616 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 15 11:46:52.552267 systemd[1]: Starting disk-uuid.service... Jul 15 11:46:52.580803 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 11:46:52.588800 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 11:46:53.594602 disk-uuid[549]: The operation has completed successfully. Jul 15 11:46:53.594851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 11:46:53.633635 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 11:46:53.633702 systemd[1]: Finished disk-uuid.service. Jul 15 11:46:53.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.634301 systemd[1]: Starting verity-setup.service... Jul 15 11:46:53.645808 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 15 11:46:53.687703 systemd[1]: Found device dev-mapper-usr.device. Jul 15 11:46:53.688611 systemd[1]: Mounting sysusr-usr.mount... Jul 15 11:46:53.690194 systemd[1]: Finished verity-setup.service. Jul 15 11:46:53.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.743559 systemd[1]: Mounted sysusr-usr.mount. Jul 15 11:46:53.743799 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 15 11:46:53.744155 systemd[1]: Starting afterburn-network-kargs.service... Jul 15 11:46:53.744625 systemd[1]: Starting ignition-setup.service... Jul 15 11:46:53.765274 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:46:53.765310 kernel: BTRFS info (device sda6): using free space tree Jul 15 11:46:53.765318 kernel: BTRFS info (device sda6): has skinny extents Jul 15 11:46:53.774806 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 15 11:46:53.780751 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 15 11:46:53.787743 systemd[1]: Finished ignition-setup.service. Jul 15 11:46:53.788319 systemd[1]: Starting ignition-fetch-offline.service... Jul 15 11:46:53.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.839208 systemd[1]: Finished afterburn-network-kargs.service. Jul 15 11:46:53.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.839965 systemd[1]: Starting parse-ip-for-networkd.service... Jul 15 11:46:53.894606 systemd[1]: Finished parse-ip-for-networkd.service. Jul 15 11:46:53.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.893000 audit: BPF prog-id=9 op=LOAD Jul 15 11:46:53.895528 systemd[1]: Starting systemd-networkd.service... 
Jul 15 11:46:53.910341 systemd-networkd[736]: lo: Link UP Jul 15 11:46:53.910345 systemd-networkd[736]: lo: Gained carrier Jul 15 11:46:53.911420 systemd-networkd[736]: Enumeration completed Jul 15 11:46:53.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.911479 systemd[1]: Started systemd-networkd.service. Jul 15 11:46:53.911625 systemd[1]: Reached target network.target. Jul 15 11:46:53.912158 systemd-networkd[736]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 15 11:46:53.912267 systemd[1]: Starting iscsiuio.service... Jul 15 11:46:53.916211 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 15 11:46:53.916347 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 15 11:46:53.916262 systemd[1]: Started iscsiuio.service. Jul 15 11:46:53.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.916914 systemd[1]: Starting iscsid.service... Jul 15 11:46:53.918056 systemd-networkd[736]: ens192: Link UP Jul 15 11:46:53.918176 systemd-networkd[736]: ens192: Gained carrier Jul 15 11:46:53.919141 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:46:53.919141 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 15 11:46:53.919141 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 15 11:46:53.919141 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 15 11:46:53.919141 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:46:53.920122 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 15 11:46:53.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.920100 systemd[1]: Started iscsid.service. Jul 15 11:46:53.920675 systemd[1]: Starting dracut-initqueue.service... Jul 15 11:46:53.928404 systemd[1]: Finished dracut-initqueue.service. Jul 15 11:46:53.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.928860 systemd[1]: Reached target remote-fs-pre.target. Jul 15 11:46:53.929167 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:46:53.929278 systemd[1]: Reached target remote-fs.target. Jul 15 11:46:53.930031 systemd[1]: Starting dracut-pre-mount.service... Jul 15 11:46:53.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 15 11:46:53.935265 systemd[1]: Finished dracut-pre-mount.service. Jul 15 11:46:53.949901 ignition[608]: Ignition 2.14.0 Jul 15 11:46:53.949908 ignition[608]: Stage: fetch-offline Jul 15 11:46:53.949945 ignition[608]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 15 11:46:53.949961 ignition[608]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 15 11:46:53.956194 ignition[608]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 15 11:46:53.956277 ignition[608]: parsed url from cmdline: "" Jul 15 11:46:53.956280 ignition[608]: no config URL provided Jul 15 11:46:53.956282 ignition[608]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 11:46:53.956288 ignition[608]: no config at "/usr/lib/ignition/user.ign" Jul 15 11:46:53.965501 ignition[608]: config successfully fetched Jul 15 11:46:53.965580 ignition[608]: parsing config with SHA512: 2770efa458df5fd06a83413e88001ee714a55b36856969ea340746836f8292f03be822aadd65ec8b63525e2cff52c2fd334b6ea217a227caf095a51cadd1c23b Jul 15 11:46:53.967940 unknown[608]: fetched base config from "system" Jul 15 11:46:53.968110 unknown[608]: fetched user config from "vmware" Jul 15 11:46:53.968645 ignition[608]: fetch-offline: fetch-offline passed Jul 15 11:46:53.968828 ignition[608]: Ignition finished successfully Jul 15 11:46:53.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.969481 systemd[1]: Finished ignition-fetch-offline.service. Jul 15 11:46:53.969629 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 11:46:53.970108 systemd[1]: Starting ignition-kargs.service... Jul 15 11:46:53.975737 ignition[756]: Ignition 2.14.0 Jul 15 11:46:53.976021 ignition[756]: Stage: kargs Jul 15 11:46:53.976196 ignition[756]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 15 11:46:53.976358 ignition[756]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 15 11:46:53.977675 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 15 11:46:53.979172 ignition[756]: kargs: kargs passed Jul 15 11:46:53.979319 ignition[756]: Ignition finished successfully Jul 15 11:46:53.980202 systemd[1]: Finished ignition-kargs.service. Jul 15 11:46:53.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.980827 systemd[1]: Starting ignition-disks.service... 
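Note on the iscsid warning above: it appears to be harmless here, since this VMware guest boots from a local PVSCSI disk and no iSCSI targets are configured. On a host that does use software iSCSI, the file the daemon asks for is a single line in the format it describes; a hypothetical example (the IQN below is a placeholder, not a value taken from this system) would be:

    InitiatorName=iqn.2004-01.com.example.host01:node1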
Jul 15 11:46:53.985663 ignition[762]: Ignition 2.14.0 Jul 15 11:46:53.985954 ignition[762]: Stage: disks Jul 15 11:46:53.986131 ignition[762]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 15 11:46:53.986297 ignition[762]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 15 11:46:53.987623 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 15 11:46:53.989300 ignition[762]: disks: disks passed Jul 15 11:46:53.989340 ignition[762]: Ignition finished successfully Jul 15 11:46:53.989978 systemd[1]: Finished ignition-disks.service. Jul 15 11:46:53.990154 systemd[1]: Reached target initrd-root-device.target. Jul 15 11:46:53.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:53.990266 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:46:53.990425 systemd[1]: Reached target local-fs.target. Jul 15 11:46:53.990582 systemd[1]: Reached target sysinit.target. Jul 15 11:46:53.990733 systemd[1]: Reached target basic.target. Jul 15 11:46:53.991391 systemd[1]: Starting systemd-fsck-root.service... Jul 15 11:46:54.002538 systemd-fsck[770]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks Jul 15 11:46:54.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:54.004008 systemd[1]: Finished systemd-fsck-root.service. Jul 15 11:46:54.004552 systemd[1]: Mounting sysroot.mount... Jul 15 11:46:54.014884 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 15 11:46:54.014586 systemd[1]: Mounted sysroot.mount. Jul 15 11:46:54.014710 systemd[1]: Reached target initrd-root-fs.target. Jul 15 11:46:54.015746 systemd[1]: Mounting sysroot-usr.mount... Jul 15 11:46:54.016097 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 15 11:46:54.016119 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 11:46:54.016132 systemd[1]: Reached target ignition-diskful.target. Jul 15 11:46:54.017551 systemd[1]: Mounted sysroot-usr.mount. Jul 15 11:46:54.018159 systemd[1]: Starting initrd-setup-root.service... Jul 15 11:46:54.021210 initrd-setup-root[780]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 11:46:54.025182 initrd-setup-root[788]: cut: /sysroot/etc/group: No such file or directory Jul 15 11:46:54.027898 initrd-setup-root[796]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 11:46:54.030346 initrd-setup-root[804]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 11:46:54.068201 systemd[1]: Finished initrd-setup-root.service. Jul 15 11:46:54.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:54.068814 systemd[1]: Starting ignition-mount.service... Jul 15 11:46:54.069277 systemd[1]: Starting sysroot-boot.service... Jul 15 11:46:54.073478 bash[821]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 15 11:46:54.078727 ignition[822]: INFO : Ignition 2.14.0 Jul 15 11:46:54.079034 ignition[822]: INFO : Stage: mount Jul 15 11:46:54.079237 ignition[822]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 15 11:46:54.079397 ignition[822]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 15 11:46:54.080890 ignition[822]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 15 11:46:54.082418 ignition[822]: INFO : mount: mount passed Jul 15 11:46:54.082561 ignition[822]: INFO : Ignition finished successfully Jul 15 11:46:54.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:54.083204 systemd[1]: Finished ignition-mount.service. Jul 15 11:46:54.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:54.089284 systemd[1]: Finished sysroot-boot.service. Jul 15 11:46:54.702236 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 15 11:46:54.711807 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (831) Jul 15 11:46:54.714196 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:46:54.714217 kernel: BTRFS info (device sda6): using free space tree Jul 15 11:46:54.714224 kernel: BTRFS info (device sda6): has skinny extents Jul 15 11:46:54.717807 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 15 11:46:54.719295 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 15 11:46:54.720108 systemd[1]: Starting ignition-files.service... 
Jul 15 11:46:54.729472 ignition[851]: INFO : Ignition 2.14.0 Jul 15 11:46:54.729472 ignition[851]: INFO : Stage: files Jul 15 11:46:54.729868 ignition[851]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 15 11:46:54.729868 ignition[851]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 15 11:46:54.730883 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 15 11:46:54.733015 ignition[851]: DEBUG : files: compiled without relabeling support, skipping Jul 15 11:46:54.733596 ignition[851]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 11:46:54.733596 ignition[851]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 11:46:54.735952 ignition[851]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 11:46:54.736126 ignition[851]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 11:46:54.736748 unknown[851]: wrote ssh authorized keys file for user: core Jul 15 11:46:54.736978 ignition[851]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 11:46:54.737348 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 11:46:54.737533 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 15 11:46:54.784164 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 11:46:55.085052 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 11:46:55.085596 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 11:46:55.085877 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 15 11:46:55.552178 systemd-networkd[736]: ens192: Gained IPv6LL Jul 15 11:46:55.562631 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 11:46:55.624387 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 11:46:55.624387 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 11:46:55.624820 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 11:46:55.624820 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:46:55.624820 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:46:55.624820 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:46:55.624820 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:46:55.624820 ignition[851]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:46:55.624820 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:46:55.625991 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:46:55.625991 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:46:55.625991 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 11:46:55.625991 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 11:46:55.625991 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Jul 15 11:46:55.625991 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Jul 15 11:46:55.631414 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4294815104" Jul 15 11:46:55.631655 ignition[851]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4294815104": device or resource busy Jul 15 11:46:55.631655 ignition[851]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4294815104", trying btrfs: device or resource busy Jul 15 11:46:55.631655 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4294815104" Jul 15 11:46:55.633985 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4294815104" Jul 15 11:46:55.634513 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem4294815104" Jul 15 11:46:55.635172 systemd[1]: mnt-oem4294815104.mount: Deactivated successfully. 
Jul 15 11:46:55.635546 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem4294815104" Jul 15 11:46:55.635728 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Jul 15 11:46:55.635728 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 11:46:55.636115 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 15 11:46:56.483200 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Jul 15 11:46:56.638026 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 11:46:56.639845 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 15 11:46:56.640032 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 15 11:46:56.640032 ignition[851]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" Jul 15 11:46:56.640032 ignition[851]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" Jul 15 11:46:56.640032 ignition[851]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Jul 15 11:46:56.640032 ignition[851]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:46:56.640032 ignition[851]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:46:56.640032 ignition[851]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Jul 15 11:46:56.640032 ignition[851]: INFO : files: op(14): [started] processing unit "coreos-metadata.service" Jul 15 11:46:56.640032 ignition[851]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:46:56.641535 ignition[851]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:46:56.641535 ignition[851]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service" Jul 15 11:46:56.641535 ignition[851]: INFO : files: op(16): [started] setting preset to enabled for "vmtoolsd.service" Jul 15 11:46:56.641535 ignition[851]: INFO : files: op(16): [finished] setting preset to enabled for "vmtoolsd.service" Jul 15 11:46:56.641535 ignition[851]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service" Jul 15 11:46:56.641535 ignition[851]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 11:46:56.641535 ignition[851]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 11:46:56.641535 ignition[851]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:46:56.739803 ignition[851]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service" 
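Note: the files stage above only records that /sysroot/etc/systemd/network/00-vmware.network was written, not its contents. A minimal systemd-networkd unit of the kind typically used for the ens192 NIC seen earlier would look like the following sketch (illustrative only; the actual file delivered here may differ):

    [Match]
    Name=ens192

    [Network]
    DHCP=yes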
Jul 15 11:46:56.740023 ignition[851]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 11:46:56.740023 ignition[851]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:46:56.740023 ignition[851]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:46:56.740023 ignition[851]: INFO : files: files passed Jul 15 11:46:56.740023 ignition[851]: INFO : Ignition finished successfully Jul 15 11:46:56.741345 systemd[1]: Finished ignition-files.service. Jul 15 11:46:56.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.742970 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 15 11:46:56.745210 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 15 11:46:56.745225 kernel: audit: type=1130 audit(1752580016.739:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.745311 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 15 11:46:56.746042 systemd[1]: Starting ignition-quench.service... Jul 15 11:46:56.748628 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 11:46:56.748691 systemd[1]: Finished ignition-quench.service. Jul 15 11:46:56.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.753771 kernel: audit: type=1130 audit(1752580016.747:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.753797 kernel: audit: type=1131 audit(1752580016.747:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.754703 initrd-setup-root-after-ignition[877]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 11:46:56.754996 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 15 11:46:56.757798 kernel: audit: type=1130 audit(1752580016.753:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.755163 systemd[1]: Reached target ignition-complete.target. Jul 15 11:46:56.758312 systemd[1]: Starting initrd-parse-etc.service... Jul 15 11:46:56.766771 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
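Note: the files, units, and presets processed above are driven by the Ignition config fetched earlier (the log notes it "fetched user config from 'vmware'"), which is never printed in the journal. As a rough sketch, a config in the Ignition v3 JSON format accepted by Ignition 2.14.0 (illustrative only, not the actual config used on this machine) looks like:

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/bin/cilium.tar.gz",
            "mode": 420,
            "contents": {
              "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"
            }
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "vmtoolsd.service", "enabled": true }
        ]
      }
    }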
Jul 15 11:46:56.766998 systemd[1]: Finished initrd-parse-etc.service. Jul 15 11:46:56.767288 systemd[1]: Reached target initrd-fs.target. Jul 15 11:46:56.767516 systemd[1]: Reached target initrd.target. Jul 15 11:46:56.767759 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 15 11:46:56.772867 kernel: audit: type=1130 audit(1752580016.765:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.772884 kernel: audit: type=1131 audit(1752580016.765:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.770708 systemd[1]: Starting dracut-pre-pivot.service... Jul 15 11:46:56.780945 kernel: audit: type=1130 audit(1752580016.775:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.777502 systemd[1]: Finished dracut-pre-pivot.service. Jul 15 11:46:56.778166 systemd[1]: Starting initrd-cleanup.service... Jul 15 11:46:56.784625 systemd[1]: Stopped target nss-lookup.target. Jul 15 11:46:56.784910 systemd[1]: Stopped target remote-cryptsetup.target. Jul 15 11:46:56.785181 systemd[1]: Stopped target timers.target. Jul 15 11:46:56.785429 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 11:46:56.785627 systemd[1]: Stopped dracut-pre-pivot.service. Jul 15 11:46:56.785947 systemd[1]: Stopped target initrd.target. Jul 15 11:46:56.786198 systemd[1]: Stopped target basic.target. Jul 15 11:46:56.786451 systemd[1]: Stopped target ignition-complete.target. Jul 15 11:46:56.786708 systemd[1]: Stopped target ignition-diskful.target. Jul 15 11:46:56.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.789355 systemd[1]: Stopped target initrd-root-device.target. Jul 15 11:46:56.789639 systemd[1]: Stopped target remote-fs.target. Jul 15 11:46:56.789815 kernel: audit: type=1131 audit(1752580016.784:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.789928 systemd[1]: Stopped target remote-fs-pre.target. Jul 15 11:46:56.790185 systemd[1]: Stopped target sysinit.target. Jul 15 11:46:56.790438 systemd[1]: Stopped target local-fs.target. Jul 15 11:46:56.790685 systemd[1]: Stopped target local-fs-pre.target. Jul 15 11:46:56.790944 systemd[1]: Stopped target swap.target. 
Jul 15 11:46:56.791165 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 11:46:56.791360 systemd[1]: Stopped dracut-pre-mount.service. Jul 15 11:46:56.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.791690 systemd[1]: Stopped target cryptsetup.target. Jul 15 11:46:56.794294 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 11:46:56.794520 systemd[1]: Stopped dracut-initqueue.service. Jul 15 11:46:56.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.794846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 11:46:56.797335 kernel: audit: type=1131 audit(1752580016.789:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.797348 kernel: audit: type=1131 audit(1752580016.793:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.794929 systemd[1]: Stopped ignition-fetch-offline.service. Jul 15 11:46:56.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.797695 systemd[1]: Stopped target paths.target. Jul 15 11:46:56.797930 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 11:46:56.801816 systemd[1]: Stopped systemd-ask-password-console.path. Jul 15 11:46:56.802084 systemd[1]: Stopped target slices.target. Jul 15 11:46:56.802330 systemd[1]: Stopped target sockets.target. Jul 15 11:46:56.802571 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 11:46:56.802747 systemd[1]: Closed iscsid.socket. Jul 15 11:46:56.803003 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 11:46:56.803210 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 15 11:46:56.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.803535 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 11:46:56.803727 systemd[1]: Stopped ignition-files.service. Jul 15 11:46:56.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.804526 systemd[1]: Stopping ignition-mount.service... Jul 15 11:46:56.804907 systemd[1]: Stopping iscsiuio.service... Jul 15 11:46:56.805103 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 11:46:56.805439 systemd[1]: Stopped kmod-static-nodes.service. Jul 15 11:46:56.806097 systemd[1]: Stopping sysroot-boot.service... Jul 15 11:46:56.806332 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 11:46:56.806574 systemd[1]: Stopped systemd-udev-trigger.service. 
Jul 15 11:46:56.806896 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 11:46:56.807101 systemd[1]: Stopped dracut-pre-trigger.service. Jul 15 11:46:56.811389 ignition[890]: INFO : Ignition 2.14.0 Jul 15 11:46:56.811389 ignition[890]: INFO : Stage: umount Jul 15 11:46:56.811694 ignition[890]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 15 11:46:56.811694 ignition[890]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 15 11:46:56.812398 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 15 11:46:56.812615 systemd[1]: Stopped iscsiuio.service. Jul 15 11:46:56.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.814262 ignition[890]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 15 11:46:56.814601 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 11:46:56.815292 systemd[1]: Finished initrd-cleanup.service. Jul 15 11:46:56.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.816264 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 11:46:56.816415 systemd[1]: Closed iscsiuio.socket. Jul 15 11:46:56.818805 ignition[890]: INFO : umount: umount passed Jul 15 11:46:56.818805 ignition[890]: INFO : Ignition finished successfully Jul 15 11:46:56.819663 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 11:46:56.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:46:56.822377 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 11:46:56.822426 systemd[1]: Stopped ignition-mount.service. Jul 15 11:46:56.822564 systemd[1]: Stopped target network.target. Jul 15 11:46:56.822647 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 11:46:56.822671 systemd[1]: Stopped ignition-disks.service. Jul 15 11:46:56.822771 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 11:46:56.822810 systemd[1]: Stopped ignition-kargs.service. Jul 15 11:46:56.822909 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 11:46:56.822929 systemd[1]: Stopped ignition-setup.service. Jul 15 11:46:56.823076 systemd[1]: Stopping systemd-networkd.service... Jul 15 11:46:56.823198 systemd[1]: Stopping systemd-resolved.service... Jul 15 11:46:56.828404 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 11:46:56.828453 systemd[1]: Stopped systemd-networkd.service. Jul 15 11:46:56.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.829151 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 11:46:56.829173 systemd[1]: Closed systemd-networkd.socket. Jul 15 11:46:56.830595 systemd[1]: Stopping network-cleanup.service... Jul 15 11:46:56.830878 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 11:46:56.830908 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 15 11:46:56.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.831312 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 15 11:46:56.831338 systemd[1]: Stopped afterburn-network-kargs.service. Jul 15 11:46:56.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.830000 audit: BPF prog-id=9 op=UNLOAD Jul 15 11:46:56.831740 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:46:56.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.831765 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:46:56.832258 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 11:46:56.832282 systemd[1]: Stopped systemd-modules-load.service. Jul 15 11:46:56.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.833710 systemd[1]: Stopping systemd-udevd.service... Jul 15 11:46:56.834809 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 11:46:56.835084 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 11:46:56.835141 systemd[1]: Stopped systemd-resolved.service. Jul 15 11:46:56.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:46:56.836662 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 11:46:56.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.836731 systemd[1]: Stopped systemd-udevd.service. Jul 15 11:46:56.837853 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 11:46:56.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.837899 systemd[1]: Stopped network-cleanup.service. Jul 15 11:46:56.838104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 11:46:56.838123 systemd[1]: Closed systemd-udevd-control.socket. Jul 15 11:46:56.838315 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 11:46:56.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.838332 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 15 11:46:56.837000 audit: BPF prog-id=6 op=UNLOAD Jul 15 11:46:56.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.838479 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 11:46:56.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.838501 systemd[1]: Stopped dracut-pre-udev.service. Jul 15 11:46:56.838660 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 11:46:56.838681 systemd[1]: Stopped dracut-cmdline.service. Jul 15 11:46:56.838905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 11:46:56.838925 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 15 11:46:56.839386 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 15 11:46:56.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.839614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 11:46:56.839641 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 15 11:46:56.843294 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 11:46:56.843356 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 15 11:46:56.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.922413 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 11:46:56.922482 systemd[1]: Stopped sysroot-boot.service. 
Jul 15 11:46:56.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.922753 systemd[1]: Reached target initrd-switch-root.target. Jul 15 11:46:56.922866 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 11:46:56.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:56.922892 systemd[1]: Stopped initrd-setup-root.service. Jul 15 11:46:56.923436 systemd[1]: Starting initrd-switch-root.service... Jul 15 11:46:56.930246 systemd[1]: Switching root. Jul 15 11:46:56.945766 iscsid[741]: iscsid shutting down. Jul 15 11:46:56.945918 systemd-journald[216]: Received SIGTERM from PID 1 (n/a). Jul 15 11:46:56.945949 systemd-journald[216]: Journal stopped Jul 15 11:46:59.703933 kernel: SELinux: Class mctp_socket not defined in policy. Jul 15 11:46:59.703952 kernel: SELinux: Class anon_inode not defined in policy. Jul 15 11:46:59.703960 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 15 11:46:59.703966 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 11:46:59.703972 kernel: SELinux: policy capability open_perms=1 Jul 15 11:46:59.703977 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 11:46:59.703985 kernel: SELinux: policy capability always_check_network=0 Jul 15 11:46:59.703991 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 11:46:59.703997 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 11:46:59.704002 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 11:46:59.704008 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 11:46:59.704015 systemd[1]: Successfully loaded SELinux policy in 46.801ms. Jul 15 11:46:59.704024 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.234ms. Jul 15 11:46:59.704032 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 15 11:46:59.704040 systemd[1]: Detected virtualization vmware. Jul 15 11:46:59.704046 systemd[1]: Detected architecture x86-64. Jul 15 11:46:59.704053 systemd[1]: Detected first boot. Jul 15 11:46:59.704061 systemd[1]: Initializing machine ID from random generator. Jul 15 11:46:59.704067 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 15 11:46:59.704074 systemd[1]: Populated /etc with preset unit settings. Jul 15 11:46:59.704081 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:46:59.704088 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:46:59.704095 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 15 11:46:59.704103 systemd[1]: iscsid.service: Deactivated successfully. Jul 15 11:46:59.704110 systemd[1]: Stopped iscsid.service. Jul 15 11:46:59.704117 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 11:46:59.704124 systemd[1]: Stopped initrd-switch-root.service. Jul 15 11:46:59.704130 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 11:46:59.704137 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 15 11:46:59.704144 systemd[1]: Created slice system-addon\x2drun.slice. Jul 15 11:46:59.704151 systemd[1]: Created slice system-getty.slice. Jul 15 11:46:59.704158 systemd[1]: Created slice system-modprobe.slice. Jul 15 11:46:59.704165 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 15 11:46:59.704172 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 15 11:46:59.704178 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 15 11:46:59.704185 systemd[1]: Created slice user.slice. Jul 15 11:46:59.704192 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:46:59.704198 systemd[1]: Started systemd-ask-password-wall.path. Jul 15 11:46:59.704204 systemd[1]: Set up automount boot.automount. Jul 15 11:46:59.704211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 15 11:46:59.704219 systemd[1]: Stopped target initrd-switch-root.target. Jul 15 11:46:59.704228 systemd[1]: Stopped target initrd-fs.target. Jul 15 11:46:59.704234 systemd[1]: Stopped target initrd-root-fs.target. Jul 15 11:46:59.704242 systemd[1]: Reached target integritysetup.target. Jul 15 11:46:59.704248 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:46:59.704255 systemd[1]: Reached target remote-fs.target. Jul 15 11:46:59.704262 systemd[1]: Reached target slices.target. Jul 15 11:46:59.704269 systemd[1]: Reached target swap.target. Jul 15 11:46:59.704277 systemd[1]: Reached target torcx.target. Jul 15 11:46:59.704284 systemd[1]: Reached target veritysetup.target. Jul 15 11:46:59.704291 systemd[1]: Listening on systemd-coredump.socket. Jul 15 11:46:59.704298 systemd[1]: Listening on systemd-initctl.socket. Jul 15 11:46:59.704305 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:46:59.704313 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:46:59.704320 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:46:59.704327 systemd[1]: Listening on systemd-userdbd.socket. Jul 15 11:46:59.704334 systemd[1]: Mounting dev-hugepages.mount... Jul 15 11:46:59.704341 systemd[1]: Mounting dev-mqueue.mount... Jul 15 11:46:59.704349 systemd[1]: Mounting media.mount... Jul 15 11:46:59.704356 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:46:59.704363 systemd[1]: Mounting sys-kernel-debug.mount... Jul 15 11:46:59.704370 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 15 11:46:59.704378 systemd[1]: Mounting tmp.mount... Jul 15 11:46:59.704385 systemd[1]: Starting flatcar-tmpfiles.service... Jul 15 11:46:59.704392 systemd[1]: Starting ignition-delete-config.service... Jul 15 11:46:59.704399 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:46:59.704406 systemd[1]: Starting modprobe@configfs.service... Jul 15 11:46:59.704413 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:46:59.704420 systemd[1]: Starting modprobe@drm.service... Jul 15 11:46:59.704427 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:46:59.704434 systemd[1]: Starting modprobe@fuse.service... 
Jul 15 11:46:59.704443 systemd[1]: Starting modprobe@loop.service... Jul 15 11:46:59.704450 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 11:46:59.704457 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 11:46:59.704464 systemd[1]: Stopped systemd-fsck-root.service. Jul 15 11:46:59.704471 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 11:46:59.704478 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 11:46:59.704485 systemd[1]: Stopped systemd-journald.service. Jul 15 11:46:59.704492 systemd[1]: Starting systemd-journald.service... Jul 15 11:46:59.704499 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:46:59.704507 systemd[1]: Starting systemd-network-generator.service... Jul 15 11:46:59.704514 systemd[1]: Starting systemd-remount-fs.service... Jul 15 11:46:59.704521 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:46:59.704528 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 11:46:59.704535 systemd[1]: Stopped verity-setup.service. Jul 15 11:46:59.704542 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:46:59.704549 kernel: fuse: init (API version 7.34) Jul 15 11:46:59.704556 systemd[1]: Mounted dev-hugepages.mount. Jul 15 11:46:59.704564 systemd[1]: Mounted dev-mqueue.mount. Jul 15 11:46:59.704571 systemd[1]: Mounted media.mount. Jul 15 11:46:59.704580 systemd-journald[1014]: Journal started Jul 15 11:46:59.704607 systemd-journald[1014]: Runtime Journal (/run/log/journal/eadc442e41bd45d68ccbebbfb34c2ebc) is 4.8M, max 38.8M, 34.0M free. Jul 15 11:46:57.401000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 11:46:57.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:46:57.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:46:57.473000 audit: BPF prog-id=10 op=LOAD Jul 15 11:46:57.473000 audit: BPF prog-id=10 op=UNLOAD Jul 15 11:46:57.473000 audit: BPF prog-id=11 op=LOAD Jul 15 11:46:57.473000 audit: BPF prog-id=11 op=UNLOAD Jul 15 11:46:57.571000 audit[923]: AVC avc: denied { associate } for pid=923 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 15 11:46:57.571000 audit[923]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b4 a1=c0000cede0 a2=c0000d7040 a3=32 items=0 ppid=906 pid=923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:46:57.571000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 15 11:46:57.573000 audit[923]: AVC avc: denied { associate } for pid=923 comm="torcx-generator" 
name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 15 11:46:57.573000 audit[923]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=906 pid=923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:46:57.573000 audit: CWD cwd="/" Jul 15 11:46:57.573000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:46:57.573000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:46:57.573000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 15 11:46:59.609000 audit: BPF prog-id=12 op=LOAD Jul 15 11:46:59.609000 audit: BPF prog-id=3 op=UNLOAD Jul 15 11:46:59.609000 audit: BPF prog-id=13 op=LOAD Jul 15 11:46:59.609000 audit: BPF prog-id=14 op=LOAD Jul 15 11:46:59.609000 audit: BPF prog-id=4 op=UNLOAD Jul 15 11:46:59.609000 audit: BPF prog-id=5 op=UNLOAD Jul 15 11:46:59.609000 audit: BPF prog-id=15 op=LOAD Jul 15 11:46:59.609000 audit: BPF prog-id=12 op=UNLOAD Jul 15 11:46:59.609000 audit: BPF prog-id=16 op=LOAD Jul 15 11:46:59.609000 audit: BPF prog-id=17 op=LOAD Jul 15 11:46:59.609000 audit: BPF prog-id=13 op=UNLOAD Jul 15 11:46:59.609000 audit: BPF prog-id=14 op=UNLOAD Jul 15 11:46:59.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.623000 audit: BPF prog-id=15 op=UNLOAD Jul 15 11:46:59.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:46:59.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.681000 audit: BPF prog-id=18 op=LOAD Jul 15 11:46:59.681000 audit: BPF prog-id=19 op=LOAD Jul 15 11:46:59.681000 audit: BPF prog-id=20 op=LOAD Jul 15 11:46:59.681000 audit: BPF prog-id=16 op=UNLOAD Jul 15 11:46:59.681000 audit: BPF prog-id=17 op=UNLOAD Jul 15 11:46:59.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.700000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 15 11:46:59.700000 audit[1014]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc4dab4520 a2=4000 a3=7ffc4dab45bc items=0 ppid=1 pid=1014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:46:59.700000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 15 11:46:57.568473 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:46:59.608439 systemd[1]: Queued start job for default target multi-user.target. Jul 15 11:46:57.569897 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:46:59.608447 systemd[1]: Unnecessary job was removed for dev-sda6.device. Jul 15 11:46:57.569911 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:46:59.611868 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 11:46:57.569934 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 15 11:46:57.569940 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 15 11:46:57.569964 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 15 11:46:57.569972 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 15 11:46:59.707595 systemd[1]: Started systemd-journald.service. 
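
The runtime journal noted above lives on a tmpfs under /run/log/journal and is capped (4.8M used of a 38.8M maximum here); it is flushed to persistent storage once systemd-journal-flush runs later in the boot. A small, hedged Python check of current journal disk usage, assuming journalctl is on PATH (its --disk-usage verb prints a one-line summary):

import subprocess

# Ask journald how much space all active and archived journal files take.
usage = subprocess.run(["journalctl", "--disk-usage"],
                       capture_output=True, text=True, check=True)
print(usage.stdout.strip())
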
Jul 15 11:46:59.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:57.570107 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 15 11:46:59.706433 systemd[1]: Mounted sys-kernel-debug.mount. Jul 15 11:46:57.570133 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:46:59.706708 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 15 11:46:59.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.709858 jq[989]: true Jul 15 11:46:57.570141 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:46:59.706867 systemd[1]: Mounted tmp.mount. Jul 15 11:46:57.572196 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 15 11:46:59.708083 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:46:57.572226 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 15 11:46:59.708302 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 11:46:57.572240 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100 Jul 15 11:46:59.708378 systemd[1]: Finished modprobe@configfs.service. Jul 15 11:46:59.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:46:59.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:57.572249 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 15 11:46:59.708595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:46:57.572259 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100 Jul 15 11:46:59.708665 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:46:57.572267 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 15 11:46:59.710550 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:46:59.352837 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:46:59.710627 systemd[1]: Finished modprobe@drm.service. Jul 15 11:46:59.352990 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:46:59.710877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
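
The torcx-generator messages interleaved here show the vendor profile being applied: the docker image is unpacked under /run/torcx/unpack, its binaries and unit files are propagated, and the resulting state is sealed into /run/metadata/torcx (the "system state sealed" entry below). A sketch that reads that metadata back, assuming the file holds one KEY="value" assignment per line in the usual EnvironmentFile style; the key names are the ones shown in the log:

def read_torcx_metadata(path="/run/metadata/torcx"):
    # Parse KEY="value" pairs such as TORCX_PROFILE_PATH and TORCX_BINDIR.
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, raw = line.partition("=")
            values[key] = raw.strip('"')
    return values

# Example: read_torcx_metadata().get("TORCX_BINDIR") should report /run/torcx/bin on this host.
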
Jul 15 11:46:59.353062 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:46:59.711153 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:46:59.353170 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:46:59.711437 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 11:46:59.353201 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 15 11:46:59.711507 systemd[1]: Finished modprobe@fuse.service. Jul 15 11:46:59.712640 jq[1027]: true Jul 15 11:46:59.353243 /usr/lib/systemd/system-generators/torcx-generator[923]: time="2025-07-15T11:46:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 15 11:46:59.711724 systemd[1]: Finished systemd-remount-fs.service. Jul 15 11:46:59.713716 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 15 11:46:59.716305 systemd[1]: Mounting sys-kernel-config.mount... Jul 15 11:46:59.716552 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 11:46:59.718143 systemd[1]: Starting systemd-hwdb-update.service... Jul 15 11:46:59.719462 systemd[1]: Starting systemd-journal-flush.service... Jul 15 11:46:59.719641 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:46:59.720888 systemd[1]: Starting systemd-random-seed.service... Jul 15 11:46:59.722319 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 15 11:46:59.722857 systemd[1]: Mounted sys-kernel-config.mount. Jul 15 11:46:59.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.727097 systemd[1]: Finished systemd-network-generator.service. Jul 15 11:46:59.727270 systemd[1]: Reached target network-pre.target. Jul 15 11:46:59.729253 systemd-journald[1014]: Time spent on flushing to /var/log/journal/eadc442e41bd45d68ccbebbfb34c2ebc is 60.102ms for 1990 entries. Jul 15 11:46:59.729253 systemd-journald[1014]: System Journal (/var/log/journal/eadc442e41bd45d68ccbebbfb34c2ebc) is 8.0M, max 584.8M, 576.8M free. Jul 15 11:46:59.817062 systemd-journald[1014]: Received client request to flush runtime journal. Jul 15 11:46:59.817114 kernel: loop: module loaded Jul 15 11:46:59.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:46:59.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.736184 systemd[1]: Finished systemd-random-seed.service. Jul 15 11:46:59.736372 systemd[1]: Reached target first-boot-complete.target. Jul 15 11:46:59.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.737839 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:46:59.738762 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:46:59.761602 systemd[1]: Finished flatcar-tmpfiles.service. Jul 15 11:46:59.761883 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:46:59.762870 systemd[1]: Starting systemd-sysusers.service... Jul 15 11:46:59.766064 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:46:59.770068 systemd[1]: Finished modprobe@loop.service. Jul 15 11:46:59.770293 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:46:59.817918 systemd[1]: Finished systemd-journal-flush.service. Jul 15 11:46:59.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.832553 systemd[1]: Finished systemd-sysusers.service. Jul 15 11:46:59.868553 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:46:59.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:46:59.869656 systemd[1]: Starting systemd-udev-settle.service... Jul 15 11:46:59.875318 udevadm[1053]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 15 11:46:59.876519 ignition[1028]: Ignition 2.14.0 Jul 15 11:46:59.876760 ignition[1028]: deleting config from guestinfo properties Jul 15 11:46:59.882167 ignition[1028]: Successfully deleted config Jul 15 11:46:59.882807 systemd[1]: Finished ignition-delete-config.service. 
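
ignition-delete-config, finished above, removes the Ignition configuration that was passed in through VMware guestinfo properties so it cannot be re-read on later boots. A hedged way to confirm the cleanup from inside the guest, assuming open-vm-tools' vmware-rpctool is installed and that the commonly used key name guestinfo.ignition.config.data is the one that carried the config:

import subprocess

# info-get returns the value of a guestinfo key; after deletion it should
# fail or come back empty.
result = subprocess.run(
    ["vmware-rpctool", "info-get guestinfo.ignition.config.data"],
    capture_output=True, text=True)
print("still present" if result.returncode == 0 and result.stdout.strip() else "cleared")
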
Jul 15 11:46:59.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.185104 systemd[1]: Finished systemd-hwdb-update.service. Jul 15 11:47:00.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.184000 audit: BPF prog-id=21 op=LOAD Jul 15 11:47:00.184000 audit: BPF prog-id=22 op=LOAD Jul 15 11:47:00.184000 audit: BPF prog-id=7 op=UNLOAD Jul 15 11:47:00.184000 audit: BPF prog-id=8 op=UNLOAD Jul 15 11:47:00.186256 systemd[1]: Starting systemd-udevd.service... Jul 15 11:47:00.197947 systemd-udevd[1054]: Using default interface naming scheme 'v252'. Jul 15 11:47:00.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.238000 audit: BPF prog-id=23 op=LOAD Jul 15 11:47:00.238476 systemd[1]: Started systemd-udevd.service. Jul 15 11:47:00.241528 systemd[1]: Starting systemd-networkd.service... Jul 15 11:47:00.248000 audit: BPF prog-id=24 op=LOAD Jul 15 11:47:00.248000 audit: BPF prog-id=25 op=LOAD Jul 15 11:47:00.248000 audit: BPF prog-id=26 op=LOAD Jul 15 11:47:00.251123 systemd[1]: Starting systemd-userdbd.service... Jul 15 11:47:00.269342 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 15 11:47:00.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.289340 systemd[1]: Started systemd-userdbd.service. Jul 15 11:47:00.310839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 15 11:47:00.316817 kernel: ACPI: button: Power Button [PWRF] Jul 15 11:47:00.337592 systemd-networkd[1065]: lo: Link UP Jul 15 11:47:00.337636 systemd-networkd[1065]: lo: Gained carrier Jul 15 11:47:00.337947 systemd-networkd[1065]: Enumeration completed Jul 15 11:47:00.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.338006 systemd[1]: Started systemd-networkd.service. Jul 15 11:47:00.338010 systemd-networkd[1065]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
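
systemd-networkd matched ens192 against /etc/systemd/network/00-vmware.network; such a unit typically pairs a [Match] section naming the interface or driver with a [Network] section enabling DHCP, and the "Gained carrier" message that follows reflects the kernel reporting link-up for the vmxnet3 NIC. A small Python check of the same link state, using the sysfs attributes the kernel exposes for every interface (the interface name is taken from this log):

iface = "ens192"
# operstate reads "up" once the link is usable; carrier reads "1" while a
# link is detected (reading carrier can fail if the interface is down).
with open(f"/sys/class/net/{iface}/operstate") as f:
    print("operstate:", f.read().strip())
with open(f"/sys/class/net/{iface}/carrier") as f:
    print("carrier:", f.read().strip())
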
Jul 15 11:47:00.340824 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 15 11:47:00.340965 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 15 11:47:00.342555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jul 15 11:47:00.342940 systemd-networkd[1065]: ens192: Link UP Jul 15 11:47:00.343035 systemd-networkd[1065]: ens192: Gained carrier Jul 15 11:47:00.393764 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Jul 15 11:47:00.397259 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 15 11:47:00.397341 kernel: Guest personality initialized and is active Jul 15 11:47:00.400000 audit[1060]: AVC avc: denied { confidentiality } for pid=1060 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 15 11:47:00.406184 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 11:47:00.406227 kernel: Initialized host personality Jul 15 11:47:00.400000 audit[1060]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55964756c780 a1=338ac a2=7f2893a22bc5 a3=5 items=110 ppid=1054 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:47:00.400000 audit: CWD cwd="/" Jul 15 11:47:00.400000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=1 name=(null) inode=25335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=2 name=(null) inode=25335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=3 name=(null) inode=25336 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=4 name=(null) inode=25335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=5 name=(null) inode=25337 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=6 name=(null) inode=25335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=7 name=(null) inode=25338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=8 name=(null) inode=25338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=9 name=(null) inode=25339 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=10 name=(null) inode=25338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=11 name=(null) inode=25340 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=12 name=(null) inode=25338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=13 name=(null) inode=25341 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=14 name=(null) inode=25338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=15 name=(null) inode=25342 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=16 name=(null) inode=25338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=17 name=(null) inode=25343 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=18 name=(null) inode=25335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=19 name=(null) inode=25344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=20 name=(null) inode=25344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=21 name=(null) inode=25345 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=22 name=(null) inode=25344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=23 name=(null) inode=25346 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=24 name=(null) inode=25344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=25 name=(null) inode=25347 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:47:00.400000 audit: PATH item=26 name=(null) inode=25344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=27 name=(null) inode=25348 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=28 name=(null) inode=25344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=29 name=(null) inode=25349 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=30 name=(null) inode=25335 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=31 name=(null) inode=25350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=32 name=(null) inode=25350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=33 name=(null) inode=25351 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=34 name=(null) inode=25350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=35 name=(null) inode=25352 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=36 name=(null) inode=25350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=37 name=(null) inode=25353 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=38 name=(null) inode=25350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=39 name=(null) inode=25354 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=40 name=(null) inode=25350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=41 name=(null) inode=25355 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=42 name=(null) inode=25335 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=43 name=(null) inode=25356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=44 name=(null) inode=25356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=45 name=(null) inode=25357 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=46 name=(null) inode=25356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=47 name=(null) inode=25358 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=48 name=(null) inode=25356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=49 name=(null) inode=25359 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=50 name=(null) inode=25356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=51 name=(null) inode=25360 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=52 name=(null) inode=25356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=53 name=(null) inode=25361 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=55 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=56 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=57 name=(null) inode=25363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=58 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=59 name=(null) inode=25364 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.416916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:47:00.400000 audit: PATH item=60 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=61 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=62 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=63 name=(null) inode=25366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=64 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=65 name=(null) inode=25367 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=66 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=67 name=(null) inode=25368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=68 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=69 name=(null) inode=25369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=70 name=(null) inode=25365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=71 name=(null) inode=25370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=72 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=73 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=74 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=75 name=(null) inode=25372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=76 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=77 name=(null) inode=25373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=78 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=79 name=(null) inode=25374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=80 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=81 name=(null) inode=25375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=82 name=(null) inode=25371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=83 name=(null) inode=25376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=84 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=85 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=86 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=87 name=(null) inode=25378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=88 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=89 name=(null) inode=25379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=90 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=91 name=(null) inode=25380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=92 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=93 name=(null) inode=25381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=94 name=(null) inode=25377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=95 name=(null) inode=25382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=96 name=(null) inode=25362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=97 name=(null) inode=25383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=98 name=(null) inode=25383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=99 name=(null) inode=25384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=100 name=(null) inode=25383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=101 name=(null) inode=25385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=102 name=(null) inode=25383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=103 name=(null) inode=25386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=104 name=(null) inode=25383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=105 name=(null) inode=25387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=106 name=(null) inode=25383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH 
item=107 name=(null) inode=25388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PATH item=109 name=(null) inode=25389 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:47:00.400000 audit: PROCTITLE proctitle="(udev-worker)" Jul 15 11:47:00.431806 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 11:47:00.434804 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 11:47:00.436817 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 15 11:47:00.437952 (udev-worker)[1063]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 15 11:47:00.447034 systemd[1]: Finished systemd-udev-settle.service. Jul 15 11:47:00.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.448116 systemd[1]: Starting lvm2-activation-early.service... Jul 15 11:47:00.466158 lvm[1087]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:47:00.488519 systemd[1]: Finished lvm2-activation-early.service. Jul 15 11:47:00.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.488763 systemd[1]: Reached target cryptsetup.target. Jul 15 11:47:00.489927 systemd[1]: Starting lvm2-activation.service... Jul 15 11:47:00.493273 lvm[1088]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:47:00.512498 systemd[1]: Finished lvm2-activation.service. Jul 15 11:47:00.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.512741 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:47:00.512896 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 11:47:00.512920 systemd[1]: Reached target local-fs.target. Jul 15 11:47:00.513047 systemd[1]: Reached target machines.target. Jul 15 11:47:00.514358 systemd[1]: Starting ldconfig.service... Jul 15 11:47:00.515250 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:47:00.515294 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:47:00.516416 systemd[1]: Starting systemd-boot-update.service... Jul 15 11:47:00.517305 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 15 11:47:00.518928 systemd[1]: Starting systemd-machine-id-commit.service... Jul 15 11:47:00.519996 systemd[1]: Starting systemd-sysext.service... 
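
systemd-sysext, started here, overlays system extension images onto /usr; a moment later it reports merging a 'kubernetes' extension. Whether an extension ships as a raw disk image or as a plain directory under /var/lib/extensions, it has to carry its payload (typically a usr/ tree) plus an extension-release file that identifies the OS it targets. A hedged sketch of that layout check for the directory form (the path and extension name are illustrative, borrowed from the log's 'kubernetes' example):

import os

def looks_like_sysext(ext_dir: str) -> bool:
    # systemd-sysext expects usr/lib/extension-release.d/extension-release.<name>
    # inside the extension, where <name> matches the extension's own name.
    name = os.path.basename(ext_dir.rstrip("/"))
    release = os.path.join(ext_dir, "usr/lib/extension-release.d",
                           f"extension-release.{name}")
    return os.path.isdir(os.path.join(ext_dir, "usr")) and os.path.isfile(release)

print(looks_like_sysext("/var/lib/extensions/kubernetes"))
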
Jul 15 11:47:00.528835 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1090 (bootctl) Jul 15 11:47:00.529529 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 15 11:47:00.540597 systemd[1]: Unmounting usr-share-oem.mount... Jul 15 11:47:00.550671 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 15 11:47:00.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.560183 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 15 11:47:00.560284 systemd[1]: Unmounted usr-share-oem.mount. Jul 15 11:47:00.596818 kernel: loop0: detected capacity change from 0 to 224512 Jul 15 11:47:00.894646 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 11:47:00.895754 systemd[1]: Finished systemd-machine-id-commit.service. Jul 15 11:47:00.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.914927 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 11:47:00.938831 kernel: loop1: detected capacity change from 0 to 224512 Jul 15 11:47:00.961805 systemd-fsck[1099]: fsck.fat 4.2 (2021-01-31) Jul 15 11:47:00.961805 systemd-fsck[1099]: /dev/sda1: 790 files, 120725/258078 clusters Jul 15 11:47:00.962384 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 15 11:47:00.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:00.963433 systemd[1]: Mounting boot.mount... Jul 15 11:47:00.977366 (sd-sysext)[1102]: Using extensions 'kubernetes'. Jul 15 11:47:00.978142 (sd-sysext)[1102]: Merged extensions into '/usr'. Jul 15 11:47:00.993414 systemd[1]: Mounted boot.mount. Jul 15 11:47:00.993681 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:47:00.994719 systemd[1]: Mounting usr-share-oem.mount... Jul 15 11:47:00.995514 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:47:00.997329 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:47:00.999005 systemd[1]: Starting modprobe@loop.service... Jul 15 11:47:00.999143 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:47:00.999227 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:47:00.999311 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:47:01.002006 systemd[1]: Mounted usr-share-oem.mount. Jul 15 11:47:01.002278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:47:01.002357 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:47:01.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 15 11:47:01.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.002668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:47:01.002739 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:47:01.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.003066 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:47:01.003168 systemd[1]: Finished modprobe@loop.service. Jul 15 11:47:01.003628 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:47:01.003736 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:47:01.004537 systemd[1]: Finished systemd-sysext.service. Jul 15 11:47:01.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.005491 systemd[1]: Starting ensure-sysext.service... Jul 15 11:47:01.007275 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 15 11:47:01.011779 systemd[1]: Reloading. Jul 15 11:47:01.041927 systemd-tmpfiles[1110]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 15 11:47:01.056417 systemd-tmpfiles[1110]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 11:47:01.065129 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-07-15T11:47:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:47:01.065327 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-07-15T11:47:01Z" level=info msg="torcx already run" Jul 15 11:47:01.073069 systemd-tmpfiles[1110]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 11:47:01.129592 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:47:01.129603 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Jul 15 11:47:01.143594 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:47:01.184000 audit: BPF prog-id=27 op=LOAD Jul 15 11:47:01.184000 audit: BPF prog-id=23 op=UNLOAD Jul 15 11:47:01.185000 audit: BPF prog-id=28 op=LOAD Jul 15 11:47:01.185000 audit: BPF prog-id=29 op=LOAD Jul 15 11:47:01.186000 audit: BPF prog-id=21 op=UNLOAD Jul 15 11:47:01.186000 audit: BPF prog-id=22 op=UNLOAD Jul 15 11:47:01.186000 audit: BPF prog-id=30 op=LOAD Jul 15 11:47:01.186000 audit: BPF prog-id=18 op=UNLOAD Jul 15 11:47:01.186000 audit: BPF prog-id=31 op=LOAD Jul 15 11:47:01.186000 audit: BPF prog-id=32 op=LOAD Jul 15 11:47:01.186000 audit: BPF prog-id=19 op=UNLOAD Jul 15 11:47:01.186000 audit: BPF prog-id=20 op=UNLOAD Jul 15 11:47:01.187000 audit: BPF prog-id=33 op=LOAD Jul 15 11:47:01.187000 audit: BPF prog-id=24 op=UNLOAD Jul 15 11:47:01.187000 audit: BPF prog-id=34 op=LOAD Jul 15 11:47:01.187000 audit: BPF prog-id=35 op=LOAD Jul 15 11:47:01.187000 audit: BPF prog-id=25 op=UNLOAD Jul 15 11:47:01.187000 audit: BPF prog-id=26 op=UNLOAD Jul 15 11:47:01.192296 systemd[1]: Finished systemd-boot-update.service. Jul 15 11:47:01.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.198573 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:47:01.199660 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:47:01.200449 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:47:01.202354 systemd[1]: Starting modprobe@loop.service... Jul 15 11:47:01.202480 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:47:01.202554 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:47:01.202816 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:47:01.203676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:47:01.203770 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:47:01.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.204105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:47:01.204174 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:47:01.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:47:01.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.204516 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:47:01.204592 systemd[1]: Finished modprobe@loop.service. Jul 15 11:47:01.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.204954 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:47:01.205016 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:47:01.205905 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:47:01.207238 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:47:01.208900 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:47:01.211077 systemd[1]: Starting modprobe@loop.service... Jul 15 11:47:01.211195 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:47:01.211268 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:47:01.211339 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:47:01.211861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:47:01.211967 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:47:01.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.212271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:47:01.212342 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:47:01.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.212725 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:47:01.212856 systemd[1]: Finished modprobe@loop.service. 
Jul 15 11:47:01.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.213150 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:47:01.213225 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:47:01.214988 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:47:01.215700 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:47:01.217475 systemd[1]: Starting modprobe@drm.service... Jul 15 11:47:01.219638 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:47:01.220411 systemd[1]: Starting modprobe@loop.service... Jul 15 11:47:01.220568 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:47:01.220641 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:47:01.222079 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 15 11:47:01.222240 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:47:01.223267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:47:01.223364 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:47:01.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.223709 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:47:01.223780 systemd[1]: Finished modprobe@drm.service. Jul 15 11:47:01.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.224078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:47:01.224143 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:47:01.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:47:01.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.224495 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:47:01.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.225348 systemd[1]: Finished ensure-sysext.service. Jul 15 11:47:01.225783 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:47:01.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.225947 systemd[1]: Finished modprobe@loop.service. Jul 15 11:47:01.226093 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:47:01.258482 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 15 11:47:01.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.259603 systemd[1]: Starting audit-rules.service... Jul 15 11:47:01.260469 systemd[1]: Starting clean-ca-certificates.service... Jul 15 11:47:01.261299 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 15 11:47:01.261000 audit: BPF prog-id=36 op=LOAD Jul 15 11:47:01.262000 audit: BPF prog-id=37 op=LOAD Jul 15 11:47:01.263876 systemd[1]: Starting systemd-resolved.service... Jul 15 11:47:01.265793 systemd[1]: Starting systemd-timesyncd.service... Jul 15 11:47:01.266593 systemd[1]: Starting systemd-update-utmp.service... Jul 15 11:47:01.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.270924 systemd[1]: Finished clean-ca-certificates.service. Jul 15 11:47:01.271086 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:47:01.271000 audit[1207]: SYSTEM_BOOT pid=1207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.273943 systemd[1]: Finished systemd-update-utmp.service. Jul 15 11:47:01.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.325263 systemd[1]: Started systemd-timesyncd.service. 
Jul 15 11:47:01.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.325427 systemd[1]: Reached target time-set.target. Jul 15 11:47:01.333596 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 15 11:47:01.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:47:01.336049 systemd-resolved[1205]: Positive Trust Anchors: Jul 15 11:47:01.336212 systemd-resolved[1205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:47:01.336284 systemd-resolved[1205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:47:01.339000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 15 11:47:01.339000 audit[1222]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc7bf59f0 a2=420 a3=0 items=0 ppid=1201 pid=1222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:47:01.339000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 15 11:47:01.341609 augenrules[1222]: No rules Jul 15 11:47:01.341927 systemd[1]: Finished audit-rules.service. Jul 15 11:48:28.032125 systemd-timesyncd[1206]: Contacted time server 15.204.87.223:123 (0.flatcar.pool.ntp.org). Jul 15 11:48:28.032174 systemd-timesyncd[1206]: Initial clock synchronization to Tue 2025-07-15 11:48:28.032034 UTC. Jul 15 11:48:28.073499 systemd-resolved[1205]: Defaulting to hostname 'linux'. Jul 15 11:48:28.074979 systemd[1]: Started systemd-resolved.service. Jul 15 11:48:28.075186 systemd[1]: Reached target network.target. Jul 15 11:48:28.075308 systemd[1]: Reached target nss-lookup.target. Jul 15 11:48:28.120215 ldconfig[1089]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 11:48:28.142525 systemd[1]: Finished ldconfig.service. Jul 15 11:48:28.143773 systemd[1]: Starting systemd-update-done.service... Jul 15 11:48:28.148292 systemd[1]: Finished systemd-update-done.service. Jul 15 11:48:28.148439 systemd[1]: Reached target sysinit.target. Jul 15 11:48:28.148584 systemd[1]: Started motdgen.path. Jul 15 11:48:28.148720 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 15 11:48:28.148904 systemd[1]: Started logrotate.timer. Jul 15 11:48:28.149044 systemd[1]: Started mdadm.timer. Jul 15 11:48:28.149149 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 15 11:48:28.149250 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jul 15 11:48:28.149273 systemd[1]: Reached target paths.target. Jul 15 11:48:28.149361 systemd[1]: Reached target timers.target. Jul 15 11:48:28.149606 systemd[1]: Listening on dbus.socket. Jul 15 11:48:28.150553 systemd[1]: Starting docker.socket... Jul 15 11:48:28.152458 systemd[1]: Listening on sshd.socket. Jul 15 11:48:28.152675 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:48:28.152919 systemd[1]: Listening on docker.socket. Jul 15 11:48:28.153142 systemd[1]: Reached target sockets.target. Jul 15 11:48:28.153283 systemd[1]: Reached target basic.target. Jul 15 11:48:28.153442 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:48:28.153459 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:48:28.154270 systemd[1]: Starting containerd.service... Jul 15 11:48:28.155202 systemd[1]: Starting dbus.service... Jul 15 11:48:28.156214 systemd[1]: Starting enable-oem-cloudinit.service... Jul 15 11:48:28.157577 systemd[1]: Starting extend-filesystems.service... Jul 15 11:48:28.158259 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 15 11:48:28.158997 jq[1232]: false Jul 15 11:48:28.159148 systemd[1]: Starting motdgen.service... Jul 15 11:48:28.161639 systemd[1]: Starting prepare-helm.service... Jul 15 11:48:28.162530 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 15 11:48:28.163634 systemd[1]: Starting sshd-keygen.service... Jul 15 11:48:28.167391 systemd[1]: Starting systemd-logind.service... Jul 15 11:48:28.167513 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:48:28.167568 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 11:48:28.168040 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 11:48:28.168413 systemd[1]: Starting update-engine.service... Jul 15 11:48:28.169216 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 15 11:48:28.170403 systemd[1]: Starting vmtoolsd.service... Jul 15 11:48:28.172754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 11:48:28.172883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 15 11:48:28.173485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 11:48:28.173584 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 15 11:48:28.178985 jq[1243]: true Jul 15 11:48:28.180345 systemd[1]: Started vmtoolsd.service. Jul 15 11:48:28.194332 tar[1246]: linux-amd64/LICENSE Jul 15 11:48:28.194502 tar[1246]: linux-amd64/helm Jul 15 11:48:28.196825 jq[1254]: true Jul 15 11:48:28.206109 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 11:48:28.206239 systemd[1]: Finished motdgen.service. Jul 15 11:48:28.209889 dbus-daemon[1231]: [system] SELinux support is enabled Jul 15 11:48:28.210165 systemd[1]: Started dbus.service. 
Jul 15 11:48:28.211726 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 11:48:28.211745 systemd[1]: Reached target system-config.target. Jul 15 11:48:28.211864 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 11:48:28.211875 systemd[1]: Reached target user-config.target. Jul 15 11:48:28.215969 extend-filesystems[1233]: Found loop1 Jul 15 11:48:28.216419 extend-filesystems[1233]: Found sda Jul 15 11:48:28.216419 extend-filesystems[1233]: Found sda1 Jul 15 11:48:28.216419 extend-filesystems[1233]: Found sda2 Jul 15 11:48:28.216419 extend-filesystems[1233]: Found sda3 Jul 15 11:48:28.216419 extend-filesystems[1233]: Found usr Jul 15 11:48:28.216419 extend-filesystems[1233]: Found sda4 Jul 15 11:48:28.216419 extend-filesystems[1233]: Found sda6 Jul 15 11:48:28.216419 extend-filesystems[1233]: Found sda7 Jul 15 11:48:28.216419 extend-filesystems[1233]: Found sda9 Jul 15 11:48:28.216419 extend-filesystems[1233]: Checking size of /dev/sda9 Jul 15 11:48:28.226079 extend-filesystems[1233]: Old size kept for /dev/sda9 Jul 15 11:48:28.226079 extend-filesystems[1233]: Found sr0 Jul 15 11:48:28.224936 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 11:48:28.225039 systemd[1]: Finished extend-filesystems.service. Jul 15 11:48:28.250082 kernel: NET: Registered PF_VSOCK protocol family Jul 15 11:48:28.250910 bash[1287]: Updated "/home/core/.ssh/authorized_keys" Jul 15 11:48:28.251655 env[1247]: time="2025-07-15T11:48:28.251631222Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 15 11:48:28.255352 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 15 11:48:28.282275 update_engine[1242]: I0715 11:48:28.281195 1242 main.cc:92] Flatcar Update Engine starting Jul 15 11:48:28.285361 systemd[1]: Started update-engine.service. Jul 15 11:48:28.286768 systemd[1]: Started locksmithd.service. Jul 15 11:48:28.287170 update_engine[1242]: I0715 11:48:28.287150 1242 update_check_scheduler.cc:74] Next update check in 4m41s Jul 15 11:48:28.295290 env[1247]: time="2025-07-15T11:48:28.295264483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 15 11:48:28.296148 systemd-logind[1241]: Watching system buttons on /dev/input/event1 (Power Button) Jul 15 11:48:28.296905 systemd-logind[1241]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 11:48:28.297965 systemd-logind[1241]: New seat seat0. Jul 15 11:48:28.298433 env[1247]: time="2025-07-15T11:48:28.298414108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:48:28.301771 env[1247]: time="2025-07-15T11:48:28.301749616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:48:28.302859 systemd[1]: Started systemd-logind.service. Jul 15 11:48:28.303319 env[1247]: time="2025-07-15T11:48:28.303305973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 15 11:48:28.303486 env[1247]: time="2025-07-15T11:48:28.303473366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:48:28.303571 env[1247]: time="2025-07-15T11:48:28.303555471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 15 11:48:28.303643 env[1247]: time="2025-07-15T11:48:28.303632683Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 15 11:48:28.303686 env[1247]: time="2025-07-15T11:48:28.303675943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 15 11:48:28.303775 env[1247]: time="2025-07-15T11:48:28.303765666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:48:28.303971 env[1247]: time="2025-07-15T11:48:28.303962136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:48:28.304115 env[1247]: time="2025-07-15T11:48:28.304102554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:48:28.304191 env[1247]: time="2025-07-15T11:48:28.304182057Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 15 11:48:28.304263 env[1247]: time="2025-07-15T11:48:28.304253011Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 15 11:48:28.304307 env[1247]: time="2025-07-15T11:48:28.304296818Z" level=info msg="metadata content store policy set" policy=shared Jul 15 11:48:28.327746 systemd-networkd[1065]: ens192: Gained IPv6LL Jul 15 11:48:28.329079 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 15 11:48:28.329372 systemd[1]: Reached target network-online.target. Jul 15 11:48:28.332171 systemd[1]: Starting kubelet.service... Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340857918Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340886799Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340895295Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340917406Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340932051Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340946636Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340955147Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340963613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340972000Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340979187Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340986133Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 15 11:48:28.341074 env[1247]: time="2025-07-15T11:48:28.340993343Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342289322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342365997Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342514950Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342533843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342542685Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342570513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342578335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342585664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342592339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342598982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342606139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342613386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342619883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342627438Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 15 11:48:28.343872 env[1247]: time="2025-07-15T11:48:28.342695758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.343489 systemd[1]: Started containerd.service. Jul 15 11:48:28.344184 env[1247]: time="2025-07-15T11:48:28.342705179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.344184 env[1247]: time="2025-07-15T11:48:28.342712477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.344184 env[1247]: time="2025-07-15T11:48:28.342720716Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 15 11:48:28.344184 env[1247]: time="2025-07-15T11:48:28.342730205Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 15 11:48:28.344184 env[1247]: time="2025-07-15T11:48:28.342736803Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 15 11:48:28.344184 env[1247]: time="2025-07-15T11:48:28.342749545Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 15 11:48:28.344184 env[1247]: time="2025-07-15T11:48:28.342772118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.342893112Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.342926251Z" level=info msg="Connect containerd service" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.342944224Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343247853Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343288393Z" level=info msg="Start subscribing containerd event" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343322482Z" level=info msg="Start recovering state" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343362374Z" level=info msg="Start event monitor" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343376789Z" level=info msg="Start snapshots syncer" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343383392Z" level=info msg="Start cni network conf syncer for default" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343389447Z" level=info msg="Start streaming server" Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343391372Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343416816Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 11:48:28.344294 env[1247]: time="2025-07-15T11:48:28.343444386Z" level=info msg="containerd successfully booted in 0.092179s" Jul 15 11:48:28.646433 tar[1246]: linux-amd64/README.md Jul 15 11:48:28.649508 systemd[1]: Finished prepare-helm.service. Jul 15 11:48:28.709034 locksmithd[1293]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 11:48:28.909364 sshd_keygen[1263]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 11:48:28.923070 systemd[1]: Finished sshd-keygen.service. Jul 15 11:48:28.924273 systemd[1]: Starting issuegen.service... Jul 15 11:48:28.927524 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 11:48:28.927625 systemd[1]: Finished issuegen.service. Jul 15 11:48:28.928764 systemd[1]: Starting systemd-user-sessions.service... Jul 15 11:48:28.937472 systemd[1]: Finished systemd-user-sessions.service. Jul 15 11:48:28.938512 systemd[1]: Started getty@tty1.service. Jul 15 11:48:28.939367 systemd[1]: Started serial-getty@ttyS0.service. Jul 15 11:48:28.939582 systemd[1]: Reached target getty.target. Jul 15 11:48:30.065490 systemd[1]: Started kubelet.service. Jul 15 11:48:30.065875 systemd[1]: Reached target multi-user.target. Jul 15 11:48:30.066918 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 15 11:48:30.072299 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 15 11:48:30.072408 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 15 11:48:30.072611 systemd[1]: Startup finished in 917ms (kernel) + 5.776s (initrd) + 6.077s (userspace) = 12.771s. 
Jul 15 11:48:30.145187 login[1359]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 15 11:48:30.146286 login[1360]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 15 11:48:30.160135 systemd[1]: Created slice user-500.slice. Jul 15 11:48:30.160979 systemd[1]: Starting user-runtime-dir@500.service... Jul 15 11:48:30.164335 systemd-logind[1241]: New session 2 of user core. Jul 15 11:48:30.167066 systemd-logind[1241]: New session 1 of user core. Jul 15 11:48:30.169610 systemd[1]: Finished user-runtime-dir@500.service. Jul 15 11:48:30.170680 systemd[1]: Starting user@500.service... Jul 15 11:48:30.175050 (systemd)[1366]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:48:30.234310 systemd[1366]: Queued start job for default target default.target. Jul 15 11:48:30.234948 systemd[1366]: Reached target paths.target. Jul 15 11:48:30.234965 systemd[1366]: Reached target sockets.target. Jul 15 11:48:30.234974 systemd[1366]: Reached target timers.target. Jul 15 11:48:30.234982 systemd[1366]: Reached target basic.target. Jul 15 11:48:30.235044 systemd[1]: Started user@500.service. Jul 15 11:48:30.235810 systemd[1]: Started session-1.scope. Jul 15 11:48:30.236333 systemd[1]: Started session-2.scope. Jul 15 11:48:30.241123 systemd[1366]: Reached target default.target. Jul 15 11:48:30.241157 systemd[1366]: Startup finished in 62ms. Jul 15 11:48:30.871144 kubelet[1363]: E0715 11:48:30.871108 1363 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:48:30.872347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:48:30.872428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:48:40.889833 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 11:48:40.889984 systemd[1]: Stopped kubelet.service. Jul 15 11:48:40.890979 systemd[1]: Starting kubelet.service... Jul 15 11:48:41.214964 systemd[1]: Started kubelet.service. Jul 15 11:48:41.256433 kubelet[1395]: E0715 11:48:41.256409 1395 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:48:41.258497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:48:41.258573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:48:51.389947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 11:48:51.390158 systemd[1]: Stopped kubelet.service. Jul 15 11:48:51.391539 systemd[1]: Starting kubelet.service... Jul 15 11:48:51.697349 systemd[1]: Started kubelet.service. 
Jul 15 11:48:51.795640 kubelet[1405]: E0715 11:48:51.795609 1405 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:48:51.796685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:48:51.796763 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:48:58.355819 systemd[1]: Created slice system-sshd.slice. Jul 15 11:48:58.356932 systemd[1]: Started sshd@0-139.178.70.105:22-147.75.109.163:42066.service. Jul 15 11:48:58.446933 sshd[1411]: Accepted publickey for core from 147.75.109.163 port 42066 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:48:58.448073 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:48:58.451095 systemd-logind[1241]: New session 3 of user core. Jul 15 11:48:58.451698 systemd[1]: Started session-3.scope. Jul 15 11:48:58.500620 systemd[1]: Started sshd@1-139.178.70.105:22-147.75.109.163:42068.service. Jul 15 11:48:58.538541 sshd[1416]: Accepted publickey for core from 147.75.109.163 port 42068 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:48:58.539714 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:48:58.544755 systemd[1]: Started session-4.scope. Jul 15 11:48:58.545005 systemd-logind[1241]: New session 4 of user core. Jul 15 11:48:58.597756 sshd[1416]: pam_unix(sshd:session): session closed for user core Jul 15 11:48:58.600174 systemd[1]: Started sshd@2-139.178.70.105:22-147.75.109.163:42070.service. Jul 15 11:48:58.601174 systemd[1]: sshd@1-139.178.70.105:22-147.75.109.163:42068.service: Deactivated successfully. Jul 15 11:48:58.601625 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 11:48:58.602046 systemd-logind[1241]: Session 4 logged out. Waiting for processes to exit. Jul 15 11:48:58.602700 systemd-logind[1241]: Removed session 4. Jul 15 11:48:58.635856 sshd[1421]: Accepted publickey for core from 147.75.109.163 port 42070 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:48:58.636505 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:48:58.639644 systemd-logind[1241]: New session 5 of user core. Jul 15 11:48:58.640219 systemd[1]: Started session-5.scope. Jul 15 11:48:58.688627 sshd[1421]: pam_unix(sshd:session): session closed for user core Jul 15 11:48:58.690808 systemd[1]: sshd@2-139.178.70.105:22-147.75.109.163:42070.service: Deactivated successfully. Jul 15 11:48:58.691159 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 11:48:58.691567 systemd-logind[1241]: Session 5 logged out. Waiting for processes to exit. Jul 15 11:48:58.692399 systemd[1]: Started sshd@3-139.178.70.105:22-147.75.109.163:42082.service. Jul 15 11:48:58.693048 systemd-logind[1241]: Removed session 5. Jul 15 11:48:58.726071 sshd[1428]: Accepted publickey for core from 147.75.109.163 port 42082 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:48:58.727317 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:48:58.730483 systemd-logind[1241]: New session 6 of user core. Jul 15 11:48:58.731178 systemd[1]: Started session-6.scope. 
Jul 15 11:48:58.782812 sshd[1428]: pam_unix(sshd:session): session closed for user core Jul 15 11:48:58.785059 systemd[1]: sshd@3-139.178.70.105:22-147.75.109.163:42082.service: Deactivated successfully. Jul 15 11:48:58.785416 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 11:48:58.785875 systemd-logind[1241]: Session 6 logged out. Waiting for processes to exit. Jul 15 11:48:58.786574 systemd[1]: Started sshd@4-139.178.70.105:22-147.75.109.163:42096.service. Jul 15 11:48:58.787497 systemd-logind[1241]: Removed session 6. Jul 15 11:48:58.822017 sshd[1434]: Accepted publickey for core from 147.75.109.163 port 42096 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:48:58.823030 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:48:58.826798 systemd-logind[1241]: New session 7 of user core. Jul 15 11:48:58.827261 systemd[1]: Started session-7.scope. Jul 15 11:48:58.903127 sudo[1437]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 11:48:58.903283 sudo[1437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:48:58.919142 systemd[1]: Starting docker.service... Jul 15 11:48:58.943993 env[1447]: time="2025-07-15T11:48:58.943956321Z" level=info msg="Starting up" Jul 15 11:48:58.944720 env[1447]: time="2025-07-15T11:48:58.944708507Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 15 11:48:58.944779 env[1447]: time="2025-07-15T11:48:58.944769171Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 15 11:48:58.944840 env[1447]: time="2025-07-15T11:48:58.944826345Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 15 11:48:58.944885 env[1447]: time="2025-07-15T11:48:58.944875463Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 15 11:48:58.945888 env[1447]: time="2025-07-15T11:48:58.945876804Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 15 11:48:58.945953 env[1447]: time="2025-07-15T11:48:58.945940128Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 15 11:48:58.946026 env[1447]: time="2025-07-15T11:48:58.946015872Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 15 11:48:58.946082 env[1447]: time="2025-07-15T11:48:58.946069269Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 15 11:48:58.997087 env[1447]: time="2025-07-15T11:48:58.997059554Z" level=info msg="Loading containers: start." Jul 15 11:48:59.115073 kernel: Initializing XFRM netlink socket Jul 15 11:48:59.169935 env[1447]: time="2025-07-15T11:48:59.169525296Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 15 11:48:59.230182 systemd-networkd[1065]: docker0: Link UP Jul 15 11:48:59.240855 env[1447]: time="2025-07-15T11:48:59.240833963Z" level=info msg="Loading containers: done." Jul 15 11:48:59.249281 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2469763820-merged.mount: Deactivated successfully. 
Jul 15 11:48:59.264833 env[1447]: time="2025-07-15T11:48:59.264765834Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 11:48:59.264950 env[1447]: time="2025-07-15T11:48:59.264924508Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 15 11:48:59.264998 env[1447]: time="2025-07-15T11:48:59.264984045Z" level=info msg="Daemon has completed initialization" Jul 15 11:48:59.282071 systemd[1]: Started docker.service. Jul 15 11:48:59.285131 env[1447]: time="2025-07-15T11:48:59.285100776Z" level=info msg="API listen on /run/docker.sock" Jul 15 11:49:00.489235 env[1247]: time="2025-07-15T11:49:00.489186447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 15 11:49:01.066171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838485040.mount: Deactivated successfully. Jul 15 11:49:01.889824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 15 11:49:01.889959 systemd[1]: Stopped kubelet.service. Jul 15 11:49:01.891034 systemd[1]: Starting kubelet.service... Jul 15 11:49:01.954624 systemd[1]: Started kubelet.service. Jul 15 11:49:01.984693 kubelet[1575]: E0715 11:49:01.984666 1575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:49:01.985533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:49:01.985609 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 15 11:49:02.378584 env[1247]: time="2025-07-15T11:49:02.378212973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:02.388735 env[1247]: time="2025-07-15T11:49:02.388708678Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:02.394706 env[1247]: time="2025-07-15T11:49:02.394687919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:02.401199 env[1247]: time="2025-07-15T11:49:02.401175245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:02.401840 env[1247]: time="2025-07-15T11:49:02.401815914Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 15 11:49:02.402292 env[1247]: time="2025-07-15T11:49:02.402275768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 15 11:49:04.019028 env[1247]: time="2025-07-15T11:49:04.018995813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:04.048902 env[1247]: time="2025-07-15T11:49:04.048879091Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:04.060978 env[1247]: time="2025-07-15T11:49:04.060950994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:04.082030 env[1247]: time="2025-07-15T11:49:04.081992850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:04.082791 env[1247]: time="2025-07-15T11:49:04.082757104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 15 11:49:04.083227 env[1247]: time="2025-07-15T11:49:04.083211760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 15 11:49:05.589967 env[1247]: time="2025-07-15T11:49:05.589934290Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:05.590884 env[1247]: time="2025-07-15T11:49:05.590867309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:05.594189 env[1247]: 
time="2025-07-15T11:49:05.594172612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:05.597005 env[1247]: time="2025-07-15T11:49:05.596987232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:05.598069 env[1247]: time="2025-07-15T11:49:05.597591694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 15 11:49:05.598505 env[1247]: time="2025-07-15T11:49:05.598471773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 15 11:49:07.546542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345184268.mount: Deactivated successfully. Jul 15 11:49:08.057927 env[1247]: time="2025-07-15T11:49:08.057876679Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:08.060044 env[1247]: time="2025-07-15T11:49:08.060020640Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:08.061590 env[1247]: time="2025-07-15T11:49:08.061567189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:08.063399 env[1247]: time="2025-07-15T11:49:08.063381755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:08.063622 env[1247]: time="2025-07-15T11:49:08.063603761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 15 11:49:08.063891 env[1247]: time="2025-07-15T11:49:08.063879528Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 11:49:08.664171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864795396.mount: Deactivated successfully. 
Jul 15 11:49:09.999718 env[1247]: time="2025-07-15T11:49:09.999687264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:10.037330 env[1247]: time="2025-07-15T11:49:10.037302797Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:10.062286 env[1247]: time="2025-07-15T11:49:10.062259837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:10.090613 env[1247]: time="2025-07-15T11:49:10.090588222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:10.091319 env[1247]: time="2025-07-15T11:49:10.091287917Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 11:49:10.091747 env[1247]: time="2025-07-15T11:49:10.091730827Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 11:49:10.639704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3477300782.mount: Deactivated successfully. Jul 15 11:49:10.649706 env[1247]: time="2025-07-15T11:49:10.649670723Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:10.652262 env[1247]: time="2025-07-15T11:49:10.652241549Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:10.654162 env[1247]: time="2025-07-15T11:49:10.654142030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:10.655776 env[1247]: time="2025-07-15T11:49:10.655755760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:10.656101 env[1247]: time="2025-07-15T11:49:10.656082139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 11:49:10.656390 env[1247]: time="2025-07-15T11:49:10.656373026Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 11:49:11.249116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708040578.mount: Deactivated successfully. Jul 15 11:49:12.139871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 15 11:49:12.140034 systemd[1]: Stopped kubelet.service. Jul 15 11:49:12.141250 systemd[1]: Starting kubelet.service... Jul 15 11:49:13.305309 systemd[1]: Started kubelet.service. Jul 15 11:49:13.314189 update_engine[1242]: I0715 11:49:13.314164 1242 update_attempter.cc:509] Updating boot flags... 
Jul 15 11:49:13.367007 kubelet[1585]: E0715 11:49:13.366977 1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:49:13.367894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:49:13.367973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:49:14.435763 env[1247]: time="2025-07-15T11:49:14.435710704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:14.460939 env[1247]: time="2025-07-15T11:49:14.460271465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:14.468194 env[1247]: time="2025-07-15T11:49:14.468171151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:14.486071 env[1247]: time="2025-07-15T11:49:14.486024464Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:14.486266 env[1247]: time="2025-07-15T11:49:14.486243956Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 15 11:49:16.761225 systemd[1]: Stopped kubelet.service. Jul 15 11:49:16.762978 systemd[1]: Starting kubelet.service... Jul 15 11:49:16.780853 systemd[1]: Reloading. Jul 15 11:49:16.829891 /usr/lib/systemd/system-generators/torcx-generator[1653]: time="2025-07-15T11:49:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:49:16.830122 /usr/lib/systemd/system-generators/torcx-generator[1653]: time="2025-07-15T11:49:16Z" level=info msg="torcx already run" Jul 15 11:49:16.915996 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:49:16.916008 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:49:16.928977 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:49:16.996026 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 11:49:16.996184 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 11:49:16.996442 systemd[1]: Stopped kubelet.service. Jul 15 11:49:16.998167 systemd[1]: Starting kubelet.service... Jul 15 11:49:18.893285 systemd[1]: Started kubelet.service. 
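The restart loop above (restart counter 4, exit status 1) fails because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style bootstrap that file is only written during kubeadm init/join, and the deprecated-flag warnings further down point at the same file as the intended home for kubelet settings. A minimal sketch of read-only checks plus the kind of file expected at that path; the YAML is written to a scratch location and its fields (beyond the cgroup driver and static pod path that this log itself reports) are illustrative assumptions, not recovered from this host:

  # confirm the restart loop and the missing config file
  systemctl status kubelet.service
  journalctl -u kubelet.service --no-pager | tail
  ls -l /var/lib/kubelet/config.yaml

  # sketch of a KubeletConfiguration; kubeadm normally generates the real one at /var/lib/kubelet/config.yaml
  cat <<'EOF' > /tmp/kubelet-config-example.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  EOF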
Jul 15 11:49:19.022033 kubelet[1717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:49:19.022301 kubelet[1717]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 11:49:19.022352 kubelet[1717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:49:19.022501 kubelet[1717]: I0715 11:49:19.022481 1717 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:49:19.256562 kubelet[1717]: I0715 11:49:19.256298 1717 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 11:49:19.256562 kubelet[1717]: I0715 11:49:19.256317 1717 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:49:19.256562 kubelet[1717]: I0715 11:49:19.256536 1717 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 11:49:19.408845 kubelet[1717]: I0715 11:49:19.408826 1717 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:49:19.409437 kubelet[1717]: E0715 11:49:19.409014 1717 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:19.425001 kubelet[1717]: E0715 11:49:19.424972 1717 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:49:19.425182 kubelet[1717]: I0715 11:49:19.425174 1717 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:49:19.427568 kubelet[1717]: I0715 11:49:19.427549 1717 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 11:49:19.427751 kubelet[1717]: I0715 11:49:19.427723 1717 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:49:19.427860 kubelet[1717]: I0715 11:49:19.427748 1717 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 11:49:19.430438 kubelet[1717]: I0715 11:49:19.430422 1717 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:49:19.430438 kubelet[1717]: I0715 11:49:19.430437 1717 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 11:49:19.430532 kubelet[1717]: I0715 11:49:19.430519 1717 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:49:19.438774 kubelet[1717]: I0715 11:49:19.438754 1717 kubelet.go:446] "Attempting to sync node with API server" Jul 15 11:49:19.438879 kubelet[1717]: I0715 11:49:19.438870 1717 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:49:19.438931 kubelet[1717]: I0715 11:49:19.438923 1717 kubelet.go:352] "Adding apiserver pod source" Jul 15 11:49:19.438985 kubelet[1717]: I0715 11:49:19.438977 1717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:49:19.510552 kubelet[1717]: I0715 11:49:19.509703 1717 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:49:19.510552 kubelet[1717]: I0715 11:49:19.510089 1717 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:49:19.510552 kubelet[1717]: W0715 11:49:19.510140 1717 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 11:49:19.539816 kubelet[1717]: W0715 11:49:19.539762 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:19.539920 kubelet[1717]: E0715 11:49:19.539828 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:19.539920 kubelet[1717]: W0715 11:49:19.539891 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:19.539920 kubelet[1717]: E0715 11:49:19.539916 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:19.540009 kubelet[1717]: I0715 11:49:19.539971 1717 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 11:49:19.540009 kubelet[1717]: I0715 11:49:19.539992 1717 server.go:1287] "Started kubelet" Jul 15 11:49:19.553041 kubelet[1717]: I0715 11:49:19.553008 1717 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:49:19.553933 kubelet[1717]: I0715 11:49:19.553920 1717 server.go:479] "Adding debug handlers to kubelet server" Jul 15 11:49:19.557018 kubelet[1717]: I0715 11:49:19.556005 1717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:49:19.557414 kubelet[1717]: I0715 11:49:19.557401 1717 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:49:19.557549 kubelet[1717]: E0715 11:49:19.556289 1717 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18526a5e99c7208c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:49:19.53997838 +0000 UTC m=+0.642861826,LastTimestamp:2025-07-15 11:49:19.53997838 +0000 UTC m=+0.642861826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:49:19.561311 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
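The reflector, certificate-signing and event-posting failures above all share one cause: nothing is listening on 139.178.70.105:6443 yet, because the kube-apiserver this kubelet is about to launch as a static pod has not started. A read-only sketch for confirming the endpoint state while the control plane is still coming up; the commands are illustrative, and "connection refused" is the expected result at this point in the log:

  # is anything listening on the API server port yet?
  ss -tlnp | grep 6443
  # probe the endpoint directly; it only answers once kube-apiserver is running
  curl -sk https://139.178.70.105:6443/healthz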
Jul 15 11:49:19.561884 kubelet[1717]: I0715 11:49:19.561380 1717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:49:19.562411 kubelet[1717]: I0715 11:49:19.562399 1717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:49:19.565000 kubelet[1717]: E0715 11:49:19.564987 1717 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:49:19.565393 kubelet[1717]: I0715 11:49:19.565382 1717 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 11:49:19.565584 kubelet[1717]: E0715 11:49:19.565573 1717 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:19.565681 kubelet[1717]: I0715 11:49:19.565671 1717 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 11:49:19.565757 kubelet[1717]: I0715 11:49:19.565749 1717 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:49:19.566072 kubelet[1717]: W0715 11:49:19.566023 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:19.566168 kubelet[1717]: E0715 11:49:19.566152 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:19.566291 kubelet[1717]: E0715 11:49:19.566273 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Jul 15 11:49:19.566464 kubelet[1717]: I0715 11:49:19.566451 1717 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:49:19.566577 kubelet[1717]: I0715 11:49:19.566563 1717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:49:19.567392 kubelet[1717]: I0715 11:49:19.567382 1717 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:49:19.587398 kubelet[1717]: I0715 11:49:19.587380 1717 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 11:49:19.587398 kubelet[1717]: I0715 11:49:19.587391 1717 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 11:49:19.587398 kubelet[1717]: I0715 11:49:19.587403 1717 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:49:19.587831 kubelet[1717]: I0715 11:49:19.587816 1717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 11:49:19.588704 kubelet[1717]: I0715 11:49:19.588667 1717 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 11:49:19.588704 kubelet[1717]: I0715 11:49:19.588686 1717 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 11:49:19.588704 kubelet[1717]: I0715 11:49:19.588699 1717 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 11:49:19.588704 kubelet[1717]: I0715 11:49:19.588703 1717 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 11:49:19.588852 kubelet[1717]: E0715 11:49:19.588734 1717 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:49:19.589085 kubelet[1717]: I0715 11:49:19.589066 1717 policy_none.go:49] "None policy: Start" Jul 15 11:49:19.589358 kubelet[1717]: I0715 11:49:19.589143 1717 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 11:49:19.589424 kubelet[1717]: I0715 11:49:19.589416 1717 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:49:19.590203 kubelet[1717]: W0715 11:49:19.589989 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:19.590203 kubelet[1717]: E0715 11:49:19.590010 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:19.593709 systemd[1]: Created slice kubepods.slice. Jul 15 11:49:19.596843 systemd[1]: Created slice kubepods-burstable.slice. Jul 15 11:49:19.599638 systemd[1]: Created slice kubepods-besteffort.slice. Jul 15 11:49:19.604660 kubelet[1717]: I0715 11:49:19.604644 1717 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:49:19.604755 kubelet[1717]: I0715 11:49:19.604744 1717 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:49:19.604787 kubelet[1717]: I0715 11:49:19.604755 1717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:49:19.605092 kubelet[1717]: I0715 11:49:19.605067 1717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:49:19.605799 kubelet[1717]: E0715 11:49:19.605658 1717 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 11:49:19.605799 kubelet[1717]: E0715 11:49:19.605697 1717 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 11:49:19.695744 systemd[1]: Created slice kubepods-burstable-pod2f6b21947194a21892ec8e5d1be2d1dd.slice. 
Jul 15 11:49:19.705991 kubelet[1717]: I0715 11:49:19.705963 1717 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:49:19.706334 kubelet[1717]: E0715 11:49:19.706312 1717 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 15 11:49:19.709460 kubelet[1717]: E0715 11:49:19.709394 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:19.711572 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 15 11:49:19.713480 kubelet[1717]: E0715 11:49:19.713440 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:19.715276 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 15 11:49:19.716276 kubelet[1717]: E0715 11:49:19.716257 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:19.767123 kubelet[1717]: I0715 11:49:19.767016 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f6b21947194a21892ec8e5d1be2d1dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f6b21947194a21892ec8e5d1be2d1dd\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:19.767123 kubelet[1717]: I0715 11:49:19.767072 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f6b21947194a21892ec8e5d1be2d1dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f6b21947194a21892ec8e5d1be2d1dd\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:19.767123 kubelet[1717]: I0715 11:49:19.767093 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:19.767123 kubelet[1717]: I0715 11:49:19.767107 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:19.767123 kubelet[1717]: I0715 11:49:19.767118 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f6b21947194a21892ec8e5d1be2d1dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f6b21947194a21892ec8e5d1be2d1dd\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:19.767387 kubelet[1717]: I0715 11:49:19.767131 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" 
(UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:19.767387 kubelet[1717]: I0715 11:49:19.767161 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:19.767387 kubelet[1717]: I0715 11:49:19.767191 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:19.767387 kubelet[1717]: I0715 11:49:19.767211 1717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:49:19.769709 kubelet[1717]: E0715 11:49:19.767617 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Jul 15 11:49:19.908314 kubelet[1717]: I0715 11:49:19.908290 1717 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:49:19.908712 kubelet[1717]: E0715 11:49:19.908688 1717 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 15 11:49:20.011869 env[1247]: time="2025-07-15T11:49:20.011833404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f6b21947194a21892ec8e5d1be2d1dd,Namespace:kube-system,Attempt:0,}" Jul 15 11:49:20.014614 env[1247]: time="2025-07-15T11:49:20.014252428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 15 11:49:20.017266 env[1247]: time="2025-07-15T11:49:20.017212697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 15 11:49:20.168519 kubelet[1717]: E0715 11:49:20.168470 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Jul 15 11:49:20.311177 kubelet[1717]: I0715 11:49:20.310944 1717 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:49:20.311537 kubelet[1717]: E0715 11:49:20.311517 1717 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 15 11:49:20.561996 kubelet[1717]: W0715 11:49:20.561780 1717 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:20.562141 kubelet[1717]: E0715 11:49:20.562128 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:20.960416 kubelet[1717]: W0715 11:49:20.960354 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:20.960416 kubelet[1717]: E0715 11:49:20.960414 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:20.969030 kubelet[1717]: E0715 11:49:20.968991 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Jul 15 11:49:21.034825 kubelet[1717]: W0715 11:49:21.034767 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:21.034825 kubelet[1717]: E0715 11:49:21.034816 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:21.113272 kubelet[1717]: I0715 11:49:21.113248 1717 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:49:21.113453 kubelet[1717]: E0715 11:49:21.113434 1717 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 15 11:49:21.169178 kubelet[1717]: W0715 11:49:21.169145 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:21.169178 kubelet[1717]: E0715 11:49:21.169175 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" 
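A few entries back (11:49:19.767) the reconciler attaches hostPath volumes (ca-certs, k8s-certs, kubeconfig, flexvolume-dir, usr-share-ca-certificates) for the three static control-plane pods, and at 11:49:20.011 the kubelet asks containerd to create their sandboxes; those pods come from manifests under the staticPodPath seen earlier, /etc/kubernetes/manifests. A read-only sketch for inspecting them; the manifest file name assumes the usual kubeadm layout and is not confirmed by this log:

  # static pod manifests live under the kubelet's staticPodPath
  ls /etc/kubernetes/manifests
  # show the hostPath volumes one manifest declares (kube-controller-manager as an example)
  grep -A3 'hostPath:' /etc/kubernetes/manifests/kube-controller-manager.yaml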
Jul 15 11:49:21.541170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590941899.mount: Deactivated successfully. Jul 15 11:49:21.551155 env[1247]: time="2025-07-15T11:49:21.551075220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.551946 env[1247]: time="2025-07-15T11:49:21.551931934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.556379 env[1247]: time="2025-07-15T11:49:21.556364430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.558170 env[1247]: time="2025-07-15T11:49:21.558148489Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.559010 env[1247]: time="2025-07-15T11:49:21.558988012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.560318 env[1247]: time="2025-07-15T11:49:21.560304028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.562097 env[1247]: time="2025-07-15T11:49:21.562084089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.563938 env[1247]: time="2025-07-15T11:49:21.563916667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.564771 env[1247]: time="2025-07-15T11:49:21.564758665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.567180 env[1247]: time="2025-07-15T11:49:21.567152773Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.567752 env[1247]: time="2025-07-15T11:49:21.567731284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.568373 env[1247]: time="2025-07-15T11:49:21.568354245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:21.589794 env[1247]: time="2025-07-15T11:49:21.580839900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:49:21.589794 env[1247]: time="2025-07-15T11:49:21.580871831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:49:21.589794 env[1247]: time="2025-07-15T11:49:21.580881379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:49:21.589794 env[1247]: time="2025-07-15T11:49:21.580994491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68d501afe4f3f885049d6b6d08cd824ebff8b1ad00b17707918476152cf22f17 pid=1758 runtime=io.containerd.runc.v2 Jul 15 11:49:21.591442 env[1247]: time="2025-07-15T11:49:21.591346491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:49:21.591442 env[1247]: time="2025-07-15T11:49:21.591375410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:49:21.591442 env[1247]: time="2025-07-15T11:49:21.591382970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:49:21.595346 env[1247]: time="2025-07-15T11:49:21.591551892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f2f9c6ca88c3ceae420949b67c03ffd75da705e304f958cb97ffcda0bafc761 pid=1776 runtime=io.containerd.runc.v2 Jul 15 11:49:21.598220 kubelet[1717]: E0715 11:49:21.598195 1717 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:21.606253 systemd[1]: Started cri-containerd-68d501afe4f3f885049d6b6d08cd824ebff8b1ad00b17707918476152cf22f17.scope. Jul 15 11:49:21.613706 systemd[1]: Started cri-containerd-1f2f9c6ca88c3ceae420949b67c03ffd75da705e304f958cb97ffcda0bafc761.scope. Jul 15 11:49:21.623509 env[1247]: time="2025-07-15T11:49:21.623471155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:49:21.623643 env[1247]: time="2025-07-15T11:49:21.623626019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:49:21.623726 env[1247]: time="2025-07-15T11:49:21.623711569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:49:21.624978 env[1247]: time="2025-07-15T11:49:21.624946326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e2ade1d7ad1c2b01cda39bd5b4c496c206399391026afe610d06e633e05aaa3 pid=1826 runtime=io.containerd.runc.v2 Jul 15 11:49:21.635095 systemd[1]: Started cri-containerd-2e2ade1d7ad1c2b01cda39bd5b4c496c206399391026afe610d06e633e05aaa3.scope. 
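The pause:3.6 image events starting at 11:49:21.551 appear even though the kubelet pulled pause:3.10 earlier: the sandboxes being created here use containerd's own configured sandbox image (containerd 1.6.x defaults to registry.k8s.io/pause:3.6), independent of the kubelet's --pod-infra-container-image flag. A read-only sketch for checking which sandbox image the CRI plugin is configured with (illustrative commands):

  # dump the effective containerd configuration and look for the CRI sandbox image
  containerd config dump | grep sandbox_image
  # or inspect the on-disk config, if one is present
  grep sandbox_image /etc/containerd/config.toml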
Jul 15 11:49:21.648801 env[1247]: time="2025-07-15T11:49:21.648768877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"68d501afe4f3f885049d6b6d08cd824ebff8b1ad00b17707918476152cf22f17\"" Jul 15 11:49:21.655801 env[1247]: time="2025-07-15T11:49:21.655489751Z" level=info msg="CreateContainer within sandbox \"68d501afe4f3f885049d6b6d08cd824ebff8b1ad00b17707918476152cf22f17\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 11:49:21.666381 env[1247]: time="2025-07-15T11:49:21.666356393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f6b21947194a21892ec8e5d1be2d1dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f2f9c6ca88c3ceae420949b67c03ffd75da705e304f958cb97ffcda0bafc761\"" Jul 15 11:49:21.667564 env[1247]: time="2025-07-15T11:49:21.667492250Z" level=info msg="CreateContainer within sandbox \"1f2f9c6ca88c3ceae420949b67c03ffd75da705e304f958cb97ffcda0bafc761\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 11:49:21.681552 env[1247]: time="2025-07-15T11:49:21.681522150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2ade1d7ad1c2b01cda39bd5b4c496c206399391026afe610d06e633e05aaa3\"" Jul 15 11:49:21.695588 env[1247]: time="2025-07-15T11:49:21.695560262Z" level=info msg="CreateContainer within sandbox \"2e2ade1d7ad1c2b01cda39bd5b4c496c206399391026afe610d06e633e05aaa3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 11:49:21.725042 env[1247]: time="2025-07-15T11:49:21.725011359Z" level=info msg="CreateContainer within sandbox \"68d501afe4f3f885049d6b6d08cd824ebff8b1ad00b17707918476152cf22f17\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2bb44dd9a5c256703b1bdafbf8bcbbbde623f7a3077ddca6ed43b4e1e630b855\"" Jul 15 11:49:21.725469 env[1247]: time="2025-07-15T11:49:21.725453780Z" level=info msg="StartContainer for \"2bb44dd9a5c256703b1bdafbf8bcbbbde623f7a3077ddca6ed43b4e1e630b855\"" Jul 15 11:49:21.727823 env[1247]: time="2025-07-15T11:49:21.727797278Z" level=info msg="CreateContainer within sandbox \"1f2f9c6ca88c3ceae420949b67c03ffd75da705e304f958cb97ffcda0bafc761\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b7e836759fe45bd23d7a570b11abdcb0e0b2e6e4d15dcd5c2502345667798748\"" Jul 15 11:49:21.728010 env[1247]: time="2025-07-15T11:49:21.727948283Z" level=info msg="CreateContainer within sandbox \"2e2ade1d7ad1c2b01cda39bd5b4c496c206399391026afe610d06e633e05aaa3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fce4578b9dc999b80fc9437a5f2f14ecdb6dcf4940133ea00c75fc677c170cb3\"" Jul 15 11:49:21.728454 env[1247]: time="2025-07-15T11:49:21.728443376Z" level=info msg="StartContainer for \"fce4578b9dc999b80fc9437a5f2f14ecdb6dcf4940133ea00c75fc677c170cb3\"" Jul 15 11:49:21.729153 env[1247]: time="2025-07-15T11:49:21.728456231Z" level=info msg="StartContainer for \"b7e836759fe45bd23d7a570b11abdcb0e0b2e6e4d15dcd5c2502345667798748\"" Jul 15 11:49:21.740965 systemd[1]: Started cri-containerd-2bb44dd9a5c256703b1bdafbf8bcbbbde623f7a3077ddca6ed43b4e1e630b855.scope. Jul 15 11:49:21.754624 systemd[1]: Started cri-containerd-b7e836759fe45bd23d7a570b11abdcb0e0b2e6e4d15dcd5c2502345667798748.scope. 
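At this point containerd has returned sandbox ids for kube-scheduler-localhost, kube-apiserver-localhost and kube-controller-manager-localhost, and the kubelet is creating and starting one container in each. A sketch of how to watch this from the CRI side; the commands are illustrative, and the container id prefix is taken from the log entries above:

  # list the pod sandboxes and containers the CRI runtime knows about
  crictl pods
  crictl ps -a
  # tail a control-plane container's log by (partial) container id
  crictl logs --tail 20 b7e836759fe4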
Jul 15 11:49:21.764122 systemd[1]: Started cri-containerd-fce4578b9dc999b80fc9437a5f2f14ecdb6dcf4940133ea00c75fc677c170cb3.scope. Jul 15 11:49:21.818259 env[1247]: time="2025-07-15T11:49:21.814414642Z" level=info msg="StartContainer for \"fce4578b9dc999b80fc9437a5f2f14ecdb6dcf4940133ea00c75fc677c170cb3\" returns successfully" Jul 15 11:49:21.818429 env[1247]: time="2025-07-15T11:49:21.818406958Z" level=info msg="StartContainer for \"b7e836759fe45bd23d7a570b11abdcb0e0b2e6e4d15dcd5c2502345667798748\" returns successfully" Jul 15 11:49:21.829932 env[1247]: time="2025-07-15T11:49:21.829901028Z" level=info msg="StartContainer for \"2bb44dd9a5c256703b1bdafbf8bcbbbde623f7a3077ddca6ed43b4e1e630b855\" returns successfully" Jul 15 11:49:22.569861 kubelet[1717]: E0715 11:49:22.569826 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="3.2s" Jul 15 11:49:22.581264 kubelet[1717]: W0715 11:49:22.581197 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:22.581264 kubelet[1717]: E0715 11:49:22.581240 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:22.597579 kubelet[1717]: E0715 11:49:22.597417 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:22.599118 kubelet[1717]: E0715 11:49:22.599106 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:22.600377 kubelet[1717]: E0715 11:49:22.600365 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:22.616130 kubelet[1717]: W0715 11:49:22.616095 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:22.616242 kubelet[1717]: E0715 11:49:22.616227 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:22.714947 kubelet[1717]: I0715 11:49:22.714931 1717 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:49:22.715277 kubelet[1717]: E0715 11:49:22.715263 1717 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 15 
11:49:22.964764 kubelet[1717]: W0715 11:49:22.964717 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:22.964964 kubelet[1717]: E0715 11:49:22.964940 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:23.008653 kubelet[1717]: W0715 11:49:23.008615 1717 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 15 11:49:23.008780 kubelet[1717]: E0715 11:49:23.008767 1717 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:49:23.601685 kubelet[1717]: E0715 11:49:23.601667 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:23.602061 kubelet[1717]: E0715 11:49:23.602046 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:24.928813 kubelet[1717]: E0715 11:49:24.928796 1717 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 15 11:49:25.280891 kubelet[1717]: E0715 11:49:25.280795 1717 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 15 11:49:25.704547 kubelet[1717]: E0715 11:49:25.704523 1717 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 15 11:49:25.773278 kubelet[1717]: E0715 11:49:25.773252 1717 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 11:49:25.916932 kubelet[1717]: I0715 11:49:25.916917 1717 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:49:25.920960 kubelet[1717]: E0715 11:49:25.920944 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:25.930115 kubelet[1717]: I0715 11:49:25.930098 1717 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 11:49:25.930357 kubelet[1717]: E0715 11:49:25.930348 1717 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 11:49:25.937165 kubelet[1717]: E0715 11:49:25.937147 1717 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:26.037816 kubelet[1717]: E0715 11:49:26.037444 1717 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:26.138082 kubelet[1717]: E0715 11:49:26.138043 1717 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:26.238336 kubelet[1717]: E0715 11:49:26.238305 1717 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:26.339323 kubelet[1717]: E0715 11:49:26.339258 1717 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:26.439930 kubelet[1717]: E0715 11:49:26.439903 1717 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:26.509220 systemd[1]: Reloading. Jul 15 11:49:26.514185 kubelet[1717]: E0715 11:49:26.514165 1717 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:49:26.540908 kubelet[1717]: E0715 11:49:26.540882 1717 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:26.573854 /usr/lib/systemd/system-generators/torcx-generator[2006]: time="2025-07-15T11:49:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:49:26.573871 /usr/lib/systemd/system-generators/torcx-generator[2006]: time="2025-07-15T11:49:26Z" level=info msg="torcx already run" Jul 15 11:49:26.641384 kubelet[1717]: E0715 11:49:26.641360 1717 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:26.645637 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:49:26.645745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:49:26.658073 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:49:26.732476 systemd[1]: Stopping kubelet.service... Jul 15 11:49:26.752368 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:49:26.752543 systemd[1]: Stopped kubelet.service. Jul 15 11:49:26.753991 systemd[1]: Starting kubelet.service... Jul 15 11:49:27.558520 systemd[1]: Started kubelet.service. Jul 15 11:49:27.613978 kubelet[2070]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:49:27.613978 kubelet[2070]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
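During both systemd reloads in this log, locksmithd.service is flagged for the deprecated CPUShares= and MemoryLimit= directives and docker.socket for a legacy /var/run path. A sketch of the kind of override systemd is asking for, done as a drop-in rather than an edit of the shipped unit; the weight and limit values here are placeholders, not taken from this host:

  # open a drop-in for locksmithd.service (written under /etc/systemd/system/locksmithd.service.d/)
  systemctl edit locksmithd.service
  # drop-in contents replacing the deprecated directives, e.g.:
  #   [Service]
  #   CPUWeight=100
  #   MemoryMax=128M
  systemctl daemon-reload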
Jul 15 11:49:27.613978 kubelet[2070]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:49:27.613978 kubelet[2070]: I0715 11:49:27.613661 2070 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:49:27.623187 kubelet[2070]: I0715 11:49:27.623146 2070 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 11:49:27.623289 kubelet[2070]: I0715 11:49:27.623280 2070 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:49:27.623495 kubelet[2070]: I0715 11:49:27.623486 2070 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 11:49:27.626624 kubelet[2070]: I0715 11:49:27.626610 2070 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 11:49:27.636420 sudo[2083]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 11:49:27.636564 sudo[2083]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 15 11:49:27.642484 kubelet[2070]: I0715 11:49:27.642463 2070 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:49:27.651210 kubelet[2070]: E0715 11:49:27.650467 2070 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:49:27.651210 kubelet[2070]: I0715 11:49:27.650504 2070 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:49:27.655748 kubelet[2070]: I0715 11:49:27.653502 2070 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 11:49:27.658064 kubelet[2070]: I0715 11:49:27.656504 2070 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:49:27.658064 kubelet[2070]: I0715 11:49:27.656540 2070 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 11:49:27.659911 kubelet[2070]: I0715 11:49:27.659220 2070 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:49:27.659911 kubelet[2070]: I0715 11:49:27.659235 2070 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 11:49:27.660789 kubelet[2070]: I0715 11:49:27.660777 2070 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:49:27.660959 kubelet[2070]: I0715 11:49:27.660947 2070 kubelet.go:446] "Attempting to sync node with API server" Jul 15 11:49:27.660995 kubelet[2070]: I0715 11:49:27.660962 2070 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:49:27.660995 kubelet[2070]: I0715 11:49:27.660973 2070 kubelet.go:352] "Adding apiserver pod source" Jul 15 11:49:27.660995 kubelet[2070]: I0715 11:49:27.660979 2070 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:49:27.662381 kubelet[2070]: I0715 11:49:27.662371 2070 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:49:27.662683 kubelet[2070]: I0715 11:49:27.662674 2070 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:49:27.662953 kubelet[2070]: I0715 11:49:27.662945 2070 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 11:49:27.663008 kubelet[2070]: I0715 11:49:27.663000 2070 server.go:1287] "Started kubelet" Jul 15 11:49:27.670814 kubelet[2070]: I0715 11:49:27.670801 2070 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:49:27.677515 kubelet[2070]: I0715 11:49:27.677486 2070 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jul 15 11:49:27.678209 kubelet[2070]: I0715 11:49:27.678188 2070 server.go:479] "Adding debug handlers to kubelet server" Jul 15 11:49:27.679331 kubelet[2070]: I0715 11:49:27.679318 2070 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 11:49:27.679631 kubelet[2070]: E0715 11:49:27.679619 2070 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:49:27.679834 kubelet[2070]: I0715 11:49:27.679825 2070 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 11:49:27.679950 kubelet[2070]: I0715 11:49:27.679943 2070 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:49:27.681309 kubelet[2070]: I0715 11:49:27.681260 2070 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:49:27.681413 kubelet[2070]: I0715 11:49:27.681401 2070 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:49:27.682430 kubelet[2070]: I0715 11:49:27.682415 2070 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:49:27.688740 kubelet[2070]: I0715 11:49:27.688653 2070 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 11:49:27.689183 kubelet[2070]: I0715 11:49:27.689172 2070 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:49:27.689295 kubelet[2070]: I0715 11:49:27.689284 2070 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:49:27.691507 kubelet[2070]: I0715 11:49:27.689557 2070 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 11:49:27.691507 kubelet[2070]: I0715 11:49:27.689572 2070 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 11:49:27.691507 kubelet[2070]: I0715 11:49:27.689583 2070 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 11:49:27.691507 kubelet[2070]: I0715 11:49:27.689588 2070 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 11:49:27.691507 kubelet[2070]: E0715 11:49:27.689621 2070 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:49:27.693471 kubelet[2070]: E0715 11:49:27.693453 2070 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:49:27.700876 kubelet[2070]: I0715 11:49:27.698200 2070 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:49:27.733353 kubelet[2070]: I0715 11:49:27.733339 2070 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 11:49:27.733484 kubelet[2070]: I0715 11:49:27.733475 2070 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 11:49:27.733536 kubelet[2070]: I0715 11:49:27.733529 2070 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:49:27.733751 kubelet[2070]: I0715 11:49:27.733743 2070 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 11:49:27.733805 kubelet[2070]: I0715 11:49:27.733790 2070 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 11:49:27.733877 kubelet[2070]: I0715 11:49:27.733870 2070 policy_none.go:49] "None policy: Start" Jul 15 11:49:27.733922 kubelet[2070]: I0715 11:49:27.733915 2070 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 11:49:27.733968 kubelet[2070]: I0715 11:49:27.733961 2070 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:49:27.734129 kubelet[2070]: I0715 11:49:27.734121 2070 state_mem.go:75] "Updated machine memory state" Jul 15 11:49:27.737269 kubelet[2070]: I0715 11:49:27.737258 2070 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:49:27.738697 kubelet[2070]: I0715 11:49:27.738689 2070 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:49:27.739505 kubelet[2070]: I0715 11:49:27.739486 2070 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:49:27.740969 kubelet[2070]: I0715 11:49:27.740962 2070 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:49:27.741814 kubelet[2070]: E0715 11:49:27.741804 2070 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 11:49:27.790146 kubelet[2070]: I0715 11:49:27.790117 2070 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:27.794667 kubelet[2070]: I0715 11:49:27.794641 2070 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:49:27.795006 kubelet[2070]: I0715 11:49:27.794996 2070 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:27.841799 kubelet[2070]: I0715 11:49:27.841739 2070 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:49:27.847139 kubelet[2070]: I0715 11:49:27.847121 2070 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 11:49:27.847272 kubelet[2070]: I0715 11:49:27.847265 2070 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 11:49:27.981368 kubelet[2070]: I0715 11:49:27.981350 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:49:27.981513 kubelet[2070]: I0715 11:49:27.981503 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f6b21947194a21892ec8e5d1be2d1dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f6b21947194a21892ec8e5d1be2d1dd\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:27.981572 kubelet[2070]: I0715 11:49:27.981562 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f6b21947194a21892ec8e5d1be2d1dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f6b21947194a21892ec8e5d1be2d1dd\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:27.981630 kubelet[2070]: I0715 11:49:27.981620 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f6b21947194a21892ec8e5d1be2d1dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f6b21947194a21892ec8e5d1be2d1dd\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:27.981682 kubelet[2070]: I0715 11:49:27.981673 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:27.981734 kubelet[2070]: I0715 11:49:27.981725 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:27.981785 kubelet[2070]: I0715 11:49:27.981776 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:27.981837 kubelet[2070]: I0715 11:49:27.981828 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:27.981904 kubelet[2070]: I0715 11:49:27.981895 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:49:28.230769 sudo[2083]: pam_unix(sudo:session): session closed for user root Jul 15 11:49:28.666619 kubelet[2070]: I0715 11:49:28.666599 2070 apiserver.go:52] "Watching apiserver" Jul 15 11:49:28.679965 kubelet[2070]: I0715 11:49:28.679946 2070 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 11:49:28.716344 kubelet[2070]: I0715 11:49:28.716322 2070 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:28.719537 kubelet[2070]: E0715 11:49:28.719522 2070 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 11:49:28.736561 kubelet[2070]: I0715 11:49:28.736523 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.736511548 podStartE2EDuration="1.736511548s" podCreationTimestamp="2025-07-15 11:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:49:28.731983 +0000 UTC m=+1.161185887" watchObservedRunningTime="2025-07-15 11:49:28.736511548 +0000 UTC m=+1.165714430" Jul 15 11:49:28.742759 kubelet[2070]: I0715 11:49:28.742729 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.742715602 podStartE2EDuration="1.742715602s" podCreationTimestamp="2025-07-15 11:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:49:28.736391174 +0000 UTC m=+1.165594061" watchObservedRunningTime="2025-07-15 11:49:28.742715602 +0000 UTC m=+1.171918482" Jul 15 11:49:28.748228 kubelet[2070]: I0715 11:49:28.748184 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.748173714 podStartE2EDuration="1.748173714s" podCreationTimestamp="2025-07-15 11:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:49:28.743213345 +0000 UTC m=+1.172416232" watchObservedRunningTime="2025-07-15 11:49:28.748173714 +0000 UTC m=+1.177376593" Jul 15 11:49:29.895200 sudo[1437]: pam_unix(sudo:session): session closed for user root Jul 15 
11:49:29.897126 sshd[1434]: pam_unix(sshd:session): session closed for user core Jul 15 11:49:29.898625 systemd[1]: sshd@4-139.178.70.105:22-147.75.109.163:42096.service: Deactivated successfully. Jul 15 11:49:29.899105 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 11:49:29.899190 systemd[1]: session-7.scope: Consumed 2.992s CPU time. Jul 15 11:49:29.899932 systemd-logind[1241]: Session 7 logged out. Waiting for processes to exit. Jul 15 11:49:29.900553 systemd-logind[1241]: Removed session 7. Jul 15 11:49:31.923260 kubelet[2070]: I0715 11:49:31.923238 2070 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 11:49:31.923659 env[1247]: time="2025-07-15T11:49:31.923625166Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 11:49:31.923803 kubelet[2070]: I0715 11:49:31.923741 2070 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 11:49:32.858596 systemd[1]: Created slice kubepods-besteffort-pode4d4309f_f501_42a2_a76f_56304057de2f.slice. Jul 15 11:49:32.869536 systemd[1]: Created slice kubepods-burstable-podd634bd25_2116_4c1c_a4e1_2c698567a88e.slice. Jul 15 11:49:32.910728 kubelet[2070]: I0715 11:49:32.910695 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4d4309f-f501-42a2-a76f-56304057de2f-xtables-lock\") pod \"kube-proxy-bvd5m\" (UID: \"e4d4309f-f501-42a2-a76f-56304057de2f\") " pod="kube-system/kube-proxy-bvd5m" Jul 15 11:49:32.910866 kubelet[2070]: I0715 11:49:32.910855 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-cgroup\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.910935 kubelet[2070]: I0715 11:49:32.910927 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4d4309f-f501-42a2-a76f-56304057de2f-lib-modules\") pod \"kube-proxy-bvd5m\" (UID: \"e4d4309f-f501-42a2-a76f-56304057de2f\") " pod="kube-system/kube-proxy-bvd5m" Jul 15 11:49:32.910996 kubelet[2070]: I0715 11:49:32.910981 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-run\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911065 kubelet[2070]: I0715 11:49:32.911048 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-hostproc\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911131 kubelet[2070]: I0715 11:49:32.911122 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-host-proc-sys-net\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911192 kubelet[2070]: I0715 11:49:32.911177 2070 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-config-path\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911249 kubelet[2070]: I0715 11:49:32.911240 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cni-path\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911312 kubelet[2070]: I0715 11:49:32.911295 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d634bd25-2116-4c1c-a4e1-2c698567a88e-clustermesh-secrets\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911367 kubelet[2070]: I0715 11:49:32.911358 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d634bd25-2116-4c1c-a4e1-2c698567a88e-hubble-tls\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911432 kubelet[2070]: I0715 11:49:32.911421 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjsct\" (UniqueName: \"kubernetes.io/projected/d634bd25-2116-4c1c-a4e1-2c698567a88e-kube-api-access-tjsct\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911496 kubelet[2070]: I0715 11:49:32.911488 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzss9\" (UniqueName: \"kubernetes.io/projected/e4d4309f-f501-42a2-a76f-56304057de2f-kube-api-access-pzss9\") pod \"kube-proxy-bvd5m\" (UID: \"e4d4309f-f501-42a2-a76f-56304057de2f\") " pod="kube-system/kube-proxy-bvd5m" Jul 15 11:49:32.911551 kubelet[2070]: I0715 11:49:32.911540 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4d4309f-f501-42a2-a76f-56304057de2f-kube-proxy\") pod \"kube-proxy-bvd5m\" (UID: \"e4d4309f-f501-42a2-a76f-56304057de2f\") " pod="kube-system/kube-proxy-bvd5m" Jul 15 11:49:32.911613 kubelet[2070]: I0715 11:49:32.911603 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-bpf-maps\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911677 kubelet[2070]: I0715 11:49:32.911659 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-lib-modules\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911730 kubelet[2070]: I0715 11:49:32.911721 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-xtables-lock\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911793 kubelet[2070]: I0715 11:49:32.911785 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-etc-cni-netd\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.911865 kubelet[2070]: I0715 11:49:32.911856 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-host-proc-sys-kernel\") pod \"cilium-j9t5h\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " pod="kube-system/cilium-j9t5h" Jul 15 11:49:32.942856 kubelet[2070]: I0715 11:49:32.942829 2070 status_manager.go:890] "Failed to get status for pod" podUID="0f94974b-ff32-49f9-94ba-cd4194e636d5" pod="kube-system/cilium-operator-6c4d7847fc-cwh45" err="pods \"cilium-operator-6c4d7847fc-cwh45\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jul 15 11:49:32.944176 systemd[1]: Created slice kubepods-besteffort-pod0f94974b_ff32_49f9_94ba_cd4194e636d5.slice. Jul 15 11:49:33.013375 kubelet[2070]: I0715 11:49:33.013349 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-942k2\" (UniqueName: \"kubernetes.io/projected/0f94974b-ff32-49f9-94ba-cd4194e636d5-kube-api-access-942k2\") pod \"cilium-operator-6c4d7847fc-cwh45\" (UID: \"0f94974b-ff32-49f9-94ba-cd4194e636d5\") " pod="kube-system/cilium-operator-6c4d7847fc-cwh45" Jul 15 11:49:33.013776 kubelet[2070]: I0715 11:49:33.013765 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f94974b-ff32-49f9-94ba-cd4194e636d5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cwh45\" (UID: \"0f94974b-ff32-49f9-94ba-cd4194e636d5\") " pod="kube-system/cilium-operator-6c4d7847fc-cwh45" Jul 15 11:49:33.014084 kubelet[2070]: I0715 11:49:33.014072 2070 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 15 11:49:33.165380 env[1247]: time="2025-07-15T11:49:33.165351709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvd5m,Uid:e4d4309f-f501-42a2-a76f-56304057de2f,Namespace:kube-system,Attempt:0,}" Jul 15 11:49:33.173067 env[1247]: time="2025-07-15T11:49:33.173029140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9t5h,Uid:d634bd25-2116-4c1c-a4e1-2c698567a88e,Namespace:kube-system,Attempt:0,}" Jul 15 11:49:33.247036 env[1247]: time="2025-07-15T11:49:33.246876182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cwh45,Uid:0f94974b-ff32-49f9-94ba-cd4194e636d5,Namespace:kube-system,Attempt:0,}" Jul 15 11:49:33.301004 env[1247]: time="2025-07-15T11:49:33.298256727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:49:33.301004 env[1247]: time="2025-07-15T11:49:33.298280243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:49:33.301004 env[1247]: time="2025-07-15T11:49:33.298287280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:49:33.301004 env[1247]: time="2025-07-15T11:49:33.300308358Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c0da98b070d43ac94b0b5a289fbe55f9a6c703f595a005244a0b58036580fde pid=2151 runtime=io.containerd.runc.v2 Jul 15 11:49:33.307261 env[1247]: time="2025-07-15T11:49:33.307220097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:49:33.307393 env[1247]: time="2025-07-15T11:49:33.307378775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:49:33.307467 env[1247]: time="2025-07-15T11:49:33.307453013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:49:33.307711 env[1247]: time="2025-07-15T11:49:33.307690103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405 pid=2180 runtime=io.containerd.runc.v2 Jul 15 11:49:33.312716 env[1247]: time="2025-07-15T11:49:33.310958895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:49:33.312716 env[1247]: time="2025-07-15T11:49:33.310989273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:49:33.312716 env[1247]: time="2025-07-15T11:49:33.311008127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:49:33.312716 env[1247]: time="2025-07-15T11:49:33.311132751Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305 pid=2179 runtime=io.containerd.runc.v2 Jul 15 11:49:33.321087 systemd[1]: Started cri-containerd-0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305.scope. Jul 15 11:49:33.324990 systemd[1]: Started cri-containerd-1c0da98b070d43ac94b0b5a289fbe55f9a6c703f595a005244a0b58036580fde.scope. Jul 15 11:49:33.337876 systemd[1]: Started cri-containerd-6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405.scope. 
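The container_manager_linux.go NodeConfig echoed near the start of this section lists the kubelet's hard-eviction thresholds (memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%) together with the systemd cgroup driver on cgroup v2. A minimal Go sketch, assuming the k8s.io/kubelet/config/v1beta1 types are available, expressing the same settings as a KubeletConfiguration fragment; only the threshold values and the driver come from the log, the rest is illustrative.

```go
package main

import (
	"fmt"

	kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1"
)

func main() {
	// Sketch only: the cgroup driver and hard-eviction thresholds reported by
	// container_manager_linux.go above, written as a KubeletConfiguration fragment.
	cfg := kubeletconfigv1beta1.KubeletConfiguration{
		CgroupDriver: "systemd",
		EvictionHard: map[string]string{
			"memory.available":   "100Mi",
			"nodefs.available":   "10%",
			"nodefs.inodesFree":  "5%",
			"imagefs.available":  "15%",
			"imagefs.inodesFree": "5%",
		},
	}
	fmt.Printf("cgroupDriver=%s evictionHard=%v\n", cfg.CgroupDriver, cfg.EvictionHard)
}
```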
Jul 15 11:49:33.356526 env[1247]: time="2025-07-15T11:49:33.356495203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9t5h,Uid:d634bd25-2116-4c1c-a4e1-2c698567a88e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\"" Jul 15 11:49:33.359935 env[1247]: time="2025-07-15T11:49:33.359907884Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 11:49:33.379418 env[1247]: time="2025-07-15T11:49:33.379389942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvd5m,Uid:e4d4309f-f501-42a2-a76f-56304057de2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c0da98b070d43ac94b0b5a289fbe55f9a6c703f595a005244a0b58036580fde\"" Jul 15 11:49:33.385062 env[1247]: time="2025-07-15T11:49:33.385018210Z" level=info msg="CreateContainer within sandbox \"1c0da98b070d43ac94b0b5a289fbe55f9a6c703f595a005244a0b58036580fde\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 11:49:33.398011 env[1247]: time="2025-07-15T11:49:33.397988033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cwh45,Uid:0f94974b-ff32-49f9-94ba-cd4194e636d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405\"" Jul 15 11:49:33.412397 env[1247]: time="2025-07-15T11:49:33.412369788Z" level=info msg="CreateContainer within sandbox \"1c0da98b070d43ac94b0b5a289fbe55f9a6c703f595a005244a0b58036580fde\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a638e1b05dd887ca12ca7020421e0b3f9a04c6b191882bef6c66e3be082d134d\"" Jul 15 11:49:33.413716 env[1247]: time="2025-07-15T11:49:33.413699964Z" level=info msg="StartContainer for \"a638e1b05dd887ca12ca7020421e0b3f9a04c6b191882bef6c66e3be082d134d\"" Jul 15 11:49:33.424233 systemd[1]: Started cri-containerd-a638e1b05dd887ca12ca7020421e0b3f9a04c6b191882bef6c66e3be082d134d.scope. Jul 15 11:49:33.451074 env[1247]: time="2025-07-15T11:49:33.451032831Z" level=info msg="StartContainer for \"a638e1b05dd887ca12ca7020421e0b3f9a04c6b191882bef6c66e3be082d134d\" returns successfully" Jul 15 11:49:33.773174 kubelet[2070]: I0715 11:49:33.773086 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bvd5m" podStartSLOduration=1.7730703490000002 podStartE2EDuration="1.773070349s" podCreationTimestamp="2025-07-15 11:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:49:33.742040713 +0000 UTC m=+6.171243608" watchObservedRunningTime="2025-07-15 11:49:33.773070349 +0000 UTC m=+6.202273234" Jul 15 11:49:38.379219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1084436973.mount: Deactivated successfully. 
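The RunPodSandbox and CreateContainer entries just above echo the CRI metadata structs the kubelet hands to containerd over the runtime API. A sketch, assuming the k8s.io/cri-api/pkg/apis/runtime/v1 types, of the same metadata for kube-proxy-bvd5m built on the client side; the name, UID, namespace, and attempt count are taken from the log.

```go
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Sandbox metadata as echoed in the "RunPodSandbox for &PodSandboxMetadata{...}" entry above.
	sandbox := &runtimeapi.PodSandboxMetadata{
		Name:      "kube-proxy-bvd5m",
		Uid:       "e4d4309f-f501-42a2-a76f-56304057de2f",
		Namespace: "kube-system",
		Attempt:   0,
	}
	// Container metadata as echoed in the matching "CreateContainer ... &ContainerMetadata{...}" entry.
	container := &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0}

	fmt.Printf("sandbox=%s/%s uid=%s container=%s\n",
		sandbox.Namespace, sandbox.Name, sandbox.Uid, container.Name)
}
```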
Jul 15 11:49:42.021252 env[1247]: time="2025-07-15T11:49:42.021213417Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:42.026359 env[1247]: time="2025-07-15T11:49:42.026331433Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:42.028477 env[1247]: time="2025-07-15T11:49:42.028450887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:42.031585 env[1247]: time="2025-07-15T11:49:42.028746926Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 11:49:42.031585 env[1247]: time="2025-07-15T11:49:42.030127995Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 11:49:42.031585 env[1247]: time="2025-07-15T11:49:42.030542237Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:49:42.048207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3305438244.mount: Deactivated successfully. Jul 15 11:49:42.053657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142303022.mount: Deactivated successfully. Jul 15 11:49:42.057885 env[1247]: time="2025-07-15T11:49:42.057850543Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\"" Jul 15 11:49:42.060220 env[1247]: time="2025-07-15T11:49:42.059249569Z" level=info msg="StartContainer for \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\"" Jul 15 11:49:42.079290 systemd[1]: Started cri-containerd-58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6.scope. Jul 15 11:49:42.121632 env[1247]: time="2025-07-15T11:49:42.121603477Z" level=info msg="StartContainer for \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\" returns successfully" Jul 15 11:49:42.175620 systemd[1]: cri-containerd-58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6.scope: Deactivated successfully. 
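The PullImage round-trip above resolves the pinned reference quay.io/cilium/cilium:v1.12.5@sha256:06ce… to the local image ID sha256:3e35…. A standard-library-only Go sketch that splits such a reference into repository, tag, and digest; this is a naive string-level illustration, not the reference parser containerd actually uses.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// The pinned reference pulled for the cilium-agent image in the entries above.
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	// Naive split: digest after "@", tag after the first ":" in what remains.
	// A registry host carrying a port (host:5000/...) would need the last ":" after the final "/" instead.
	rest, digest, _ := strings.Cut(ref, "@")
	repo, tag, _ := strings.Cut(rest, ":")

	fmt.Println("repository:", repo)
	fmt.Println("tag:", tag)
	fmt.Println("digest:", digest)
}
```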
Jul 15 11:49:42.762390 env[1247]: time="2025-07-15T11:49:42.762346617Z" level=info msg="shim disconnected" id=58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6 Jul 15 11:49:42.762622 env[1247]: time="2025-07-15T11:49:42.762605980Z" level=warning msg="cleaning up after shim disconnected" id=58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6 namespace=k8s.io Jul 15 11:49:42.762692 env[1247]: time="2025-07-15T11:49:42.762677786Z" level=info msg="cleaning up dead shim" Jul 15 11:49:42.769866 env[1247]: time="2025-07-15T11:49:42.769827561Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:49:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2486 runtime=io.containerd.runc.v2\n" Jul 15 11:49:42.842213 env[1247]: time="2025-07-15T11:49:42.842189448Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:49:42.904778 env[1247]: time="2025-07-15T11:49:42.904746233Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\"" Jul 15 11:49:42.905412 env[1247]: time="2025-07-15T11:49:42.905379243Z" level=info msg="StartContainer for \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\"" Jul 15 11:49:42.921032 systemd[1]: Started cri-containerd-98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c.scope. Jul 15 11:49:42.947295 env[1247]: time="2025-07-15T11:49:42.947250677Z" level=info msg="StartContainer for \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\" returns successfully" Jul 15 11:49:42.970210 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:49:42.970394 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:49:42.970558 systemd[1]: Stopping systemd-sysctl.service... Jul 15 11:49:42.971866 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:49:42.974737 systemd[1]: cri-containerd-98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c.scope: Deactivated successfully. Jul 15 11:49:43.002867 env[1247]: time="2025-07-15T11:49:43.002840122Z" level=info msg="shim disconnected" id=98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c Jul 15 11:49:43.002994 env[1247]: time="2025-07-15T11:49:43.002981497Z" level=warning msg="cleaning up after shim disconnected" id=98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c namespace=k8s.io Jul 15 11:49:43.003041 env[1247]: time="2025-07-15T11:49:43.003031355Z" level=info msg="cleaning up dead shim" Jul 15 11:49:43.008776 env[1247]: time="2025-07-15T11:49:43.008742781Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:49:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2554 runtime=io.containerd.runc.v2\n" Jul 15 11:49:43.046216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6-rootfs.mount: Deactivated successfully. Jul 15 11:49:43.051742 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:49:43.693151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493751634.mount: Deactivated successfully. 
Jul 15 11:49:43.864134 env[1247]: time="2025-07-15T11:49:43.864102532Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 11:49:43.876050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594374928.mount: Deactivated successfully. Jul 15 11:49:43.881474 env[1247]: time="2025-07-15T11:49:43.881445059Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\"" Jul 15 11:49:43.882894 env[1247]: time="2025-07-15T11:49:43.882872665Z" level=info msg="StartContainer for \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\"" Jul 15 11:49:43.921598 systemd[1]: Started cri-containerd-319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da.scope. Jul 15 11:49:43.943934 env[1247]: time="2025-07-15T11:49:43.943878229Z" level=info msg="StartContainer for \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\" returns successfully" Jul 15 11:49:43.960913 systemd[1]: cri-containerd-319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da.scope: Deactivated successfully. Jul 15 11:49:44.157528 env[1247]: time="2025-07-15T11:49:44.157493428Z" level=info msg="shim disconnected" id=319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da Jul 15 11:49:44.157727 env[1247]: time="2025-07-15T11:49:44.157710916Z" level=warning msg="cleaning up after shim disconnected" id=319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da namespace=k8s.io Jul 15 11:49:44.157800 env[1247]: time="2025-07-15T11:49:44.157787394Z" level=info msg="cleaning up dead shim" Jul 15 11:49:44.180736 env[1247]: time="2025-07-15T11:49:44.180703035Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2611 runtime=io.containerd.runc.v2\n" Jul 15 11:49:44.367695 env[1247]: time="2025-07-15T11:49:44.367262138Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:44.369565 env[1247]: time="2025-07-15T11:49:44.369032429Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:44.373387 env[1247]: time="2025-07-15T11:49:44.370848798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:49:44.373387 env[1247]: time="2025-07-15T11:49:44.371227192Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 15 11:49:44.377283 env[1247]: time="2025-07-15T11:49:44.377242420Z" level=info msg="CreateContainer within sandbox \"6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 11:49:44.396131 env[1247]: time="2025-07-15T11:49:44.396093851Z" level=info msg="CreateContainer within sandbox \"6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\"" Jul 15 11:49:44.396831 env[1247]: time="2025-07-15T11:49:44.396809693Z" level=info msg="StartContainer for \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\"" Jul 15 11:49:44.412530 systemd[1]: Started cri-containerd-281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b.scope. Jul 15 11:49:44.456556 env[1247]: time="2025-07-15T11:49:44.456524211Z" level=info msg="StartContainer for \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\" returns successfully" Jul 15 11:49:44.869450 env[1247]: time="2025-07-15T11:49:44.869419492Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 11:49:44.878719 env[1247]: time="2025-07-15T11:49:44.878686226Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\"" Jul 15 11:49:44.879476 env[1247]: time="2025-07-15T11:49:44.878993395Z" level=info msg="StartContainer for \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\"" Jul 15 11:49:44.896431 systemd[1]: Started cri-containerd-96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452.scope. Jul 15 11:49:44.941433 env[1247]: time="2025-07-15T11:49:44.941390767Z" level=info msg="StartContainer for \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\" returns successfully" Jul 15 11:49:44.978597 systemd[1]: cri-containerd-96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452.scope: Deactivated successfully. 
Jul 15 11:49:45.011760 env[1247]: time="2025-07-15T11:49:45.011731382Z" level=info msg="shim disconnected" id=96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452 Jul 15 11:49:45.011906 env[1247]: time="2025-07-15T11:49:45.011892295Z" level=warning msg="cleaning up after shim disconnected" id=96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452 namespace=k8s.io Jul 15 11:49:45.011956 env[1247]: time="2025-07-15T11:49:45.011945943Z" level=info msg="cleaning up dead shim" Jul 15 11:49:45.021144 env[1247]: time="2025-07-15T11:49:45.021109677Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:49:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2702 runtime=io.containerd.runc.v2\n" Jul 15 11:49:45.040713 kubelet[2070]: I0715 11:49:45.037060 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cwh45" podStartSLOduration=2.057435048 podStartE2EDuration="13.034113302s" podCreationTimestamp="2025-07-15 11:49:32 +0000 UTC" firstStartedPulling="2025-07-15 11:49:33.398753156 +0000 UTC m=+5.827956034" lastFinishedPulling="2025-07-15 11:49:44.375431408 +0000 UTC m=+16.804634288" observedRunningTime="2025-07-15 11:49:44.976642225 +0000 UTC m=+17.405845111" watchObservedRunningTime="2025-07-15 11:49:45.034113302 +0000 UTC m=+17.463316183" Jul 15 11:49:45.894107 env[1247]: time="2025-07-15T11:49:45.893836915Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 11:49:45.914622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036448922.mount: Deactivated successfully. Jul 15 11:49:45.917197 env[1247]: time="2025-07-15T11:49:45.917138145Z" level=info msg="CreateContainer within sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\"" Jul 15 11:49:45.918932 env[1247]: time="2025-07-15T11:49:45.917762879Z" level=info msg="StartContainer for \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\"" Jul 15 11:49:45.936578 systemd[1]: Started cri-containerd-e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba.scope. Jul 15 11:49:45.964751 env[1247]: time="2025-07-15T11:49:45.964720619Z" level=info msg="StartContainer for \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\" returns successfully" Jul 15 11:49:46.164828 kubelet[2070]: I0715 11:49:46.164752 2070 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 11:49:46.255576 systemd[1]: Created slice kubepods-burstable-pod725b69eb_2b8b_432d_80e7_b8eb09b40560.slice. Jul 15 11:49:46.262681 systemd[1]: Created slice kubepods-burstable-pod55727054_8893_4469_baf3_5c3c19978026.slice. 
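The pod_startup_latency_tracker entry above reports podStartE2EDuration="13.034113302s" for cilium-operator-6c4d7847fc-cwh45, which is exactly the gap between the logged podCreationTimestamp and watchObservedRunningTime. A short Go sketch reproducing that arithmetic with the values from the log; the layout string matches the default time.Time formatting the kubelet prints.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Default Go time.Time string layout as printed by the kubelet; fractional
	// seconds in the input are accepted even though the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-07-15 11:49:32 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-07-15 11:49:45.034113302 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 13.034113302s, matching the logged podStartE2EDuration for the operator pod.
	fmt.Println(observed.Sub(created))
}
```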
Jul 15 11:49:46.370294 kubelet[2070]: I0715 11:49:46.370157 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55727054-8893-4469-baf3-5c3c19978026-config-volume\") pod \"coredns-668d6bf9bc-49jjx\" (UID: \"55727054-8893-4469-baf3-5c3c19978026\") " pod="kube-system/coredns-668d6bf9bc-49jjx" Jul 15 11:49:46.370294 kubelet[2070]: I0715 11:49:46.370205 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsqpt\" (UniqueName: \"kubernetes.io/projected/725b69eb-2b8b-432d-80e7-b8eb09b40560-kube-api-access-rsqpt\") pod \"coredns-668d6bf9bc-pt7zs\" (UID: \"725b69eb-2b8b-432d-80e7-b8eb09b40560\") " pod="kube-system/coredns-668d6bf9bc-pt7zs" Jul 15 11:49:46.370294 kubelet[2070]: I0715 11:49:46.370225 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrcv\" (UniqueName: \"kubernetes.io/projected/55727054-8893-4469-baf3-5c3c19978026-kube-api-access-bjrcv\") pod \"coredns-668d6bf9bc-49jjx\" (UID: \"55727054-8893-4469-baf3-5c3c19978026\") " pod="kube-system/coredns-668d6bf9bc-49jjx" Jul 15 11:49:46.370294 kubelet[2070]: I0715 11:49:46.370243 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/725b69eb-2b8b-432d-80e7-b8eb09b40560-config-volume\") pod \"coredns-668d6bf9bc-pt7zs\" (UID: \"725b69eb-2b8b-432d-80e7-b8eb09b40560\") " pod="kube-system/coredns-668d6bf9bc-pt7zs" Jul 15 11:49:46.564674 env[1247]: time="2025-07-15T11:49:46.564598064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pt7zs,Uid:725b69eb-2b8b-432d-80e7-b8eb09b40560,Namespace:kube-system,Attempt:0,}" Jul 15 11:49:46.566395 env[1247]: time="2025-07-15T11:49:46.566378519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-49jjx,Uid:55727054-8893-4469-baf3-5c3c19978026,Namespace:kube-system,Attempt:0,}" Jul 15 11:49:47.392085 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 15 11:49:47.767072 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
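The two Spectre V2 warnings above are the kernel noting that unprivileged eBPF is still enabled (kernel.unprivileged_bpf_disabled = 0) while eIBRS is in use, surfacing here as Cilium begins loading BPF programs. A minimal Go sketch that reads that sysctl from /proc; whether to flip it to 1 or 2 is an operational decision this log does not settle.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// 0 = unprivileged BPF allowed (what the warning above refers to),
	// 1 = disabled and locked until reboot, 2 = disabled but still changeable.
	raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read sysctl:", err)
		os.Exit(1)
	}
	fmt.Println("kernel.unprivileged_bpf_disabled =", strings.TrimSpace(string(raw)))
}
```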
Jul 15 11:49:49.390236 systemd-networkd[1065]: cilium_host: Link UP Jul 15 11:49:49.393983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 15 11:49:49.394024 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 15 11:49:49.393661 systemd-networkd[1065]: cilium_net: Link UP Jul 15 11:49:49.393760 systemd-networkd[1065]: cilium_net: Gained carrier Jul 15 11:49:49.393849 systemd-networkd[1065]: cilium_host: Gained carrier Jul 15 11:49:49.520639 systemd-networkd[1065]: cilium_vxlan: Link UP Jul 15 11:49:49.520644 systemd-networkd[1065]: cilium_vxlan: Gained carrier Jul 15 11:49:50.100219 systemd-networkd[1065]: cilium_net: Gained IPv6LL Jul 15 11:49:50.252090 kernel: NET: Registered PF_ALG protocol family Jul 15 11:49:50.293173 systemd-networkd[1065]: cilium_host: Gained IPv6LL Jul 15 11:49:50.816188 systemd-networkd[1065]: lxc_health: Link UP Jul 15 11:49:50.819737 systemd-networkd[1065]: lxc_health: Gained carrier Jul 15 11:49:50.820092 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 15 11:49:51.184635 kubelet[2070]: I0715 11:49:51.184599 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j9t5h" podStartSLOduration=10.512716308 podStartE2EDuration="19.184588061s" podCreationTimestamp="2025-07-15 11:49:32 +0000 UTC" firstStartedPulling="2025-07-15 11:49:33.35769636 +0000 UTC m=+5.786899238" lastFinishedPulling="2025-07-15 11:49:42.029568116 +0000 UTC m=+14.458770991" observedRunningTime="2025-07-15 11:49:46.907435484 +0000 UTC m=+19.336638371" watchObservedRunningTime="2025-07-15 11:49:51.184588061 +0000 UTC m=+23.613790944" Jul 15 11:49:51.224419 systemd-networkd[1065]: lxcefc092f25952: Link UP Jul 15 11:49:51.233145 systemd-networkd[1065]: lxc6d7a8a218885: Link UP Jul 15 11:49:51.240075 kernel: eth0: renamed from tmpee300 Jul 15 11:49:51.244095 kernel: eth0: renamed from tmp3999d Jul 15 11:49:51.247652 systemd-networkd[1065]: lxc6d7a8a218885: Gained carrier Jul 15 11:49:51.248208 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6d7a8a218885: link becomes ready Jul 15 11:49:51.249767 systemd-networkd[1065]: lxcefc092f25952: Gained carrier Jul 15 11:49:51.250245 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcefc092f25952: link becomes ready Jul 15 11:49:51.508169 systemd-networkd[1065]: cilium_vxlan: Gained IPv6LL Jul 15 11:49:52.276182 systemd-networkd[1065]: lxc_health: Gained IPv6LL Jul 15 11:49:52.404224 systemd-networkd[1065]: lxcefc092f25952: Gained IPv6LL Jul 15 11:49:53.108180 systemd-networkd[1065]: lxc6d7a8a218885: Gained IPv6LL Jul 15 11:49:53.814243 env[1247]: time="2025-07-15T11:49:53.814191516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:49:53.814482 env[1247]: time="2025-07-15T11:49:53.814220542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:49:53.814482 env[1247]: time="2025-07-15T11:49:53.814227901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:49:53.814482 env[1247]: time="2025-07-15T11:49:53.814301190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3999d2bc73c4da4eec0e00a4c4604cc7436c5adcf0d9c251fe4fb2b229e531cf pid=3253 runtime=io.containerd.runc.v2 Jul 15 11:49:53.827760 systemd[1]: run-containerd-runc-k8s.io-3999d2bc73c4da4eec0e00a4c4604cc7436c5adcf0d9c251fe4fb2b229e531cf-runc.pQEu5K.mount: Deactivated successfully. Jul 15 11:49:53.830808 systemd[1]: Started cri-containerd-3999d2bc73c4da4eec0e00a4c4604cc7436c5adcf0d9c251fe4fb2b229e531cf.scope. Jul 15 11:49:53.843898 systemd-resolved[1205]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:49:53.866189 env[1247]: time="2025-07-15T11:49:53.845568582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:49:53.866189 env[1247]: time="2025-07-15T11:49:53.845598265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:49:53.866189 env[1247]: time="2025-07-15T11:49:53.845605445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:49:53.866189 env[1247]: time="2025-07-15T11:49:53.845692845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee300defb3b33c92e4f5adbcdb625e5560e06ea4e0f3d78f43660abca2ca9652 pid=3286 runtime=io.containerd.runc.v2 Jul 15 11:49:53.864097 systemd[1]: Started cri-containerd-ee300defb3b33c92e4f5adbcdb625e5560e06ea4e0f3d78f43660abca2ca9652.scope. 
Jul 15 11:49:53.878735 env[1247]: time="2025-07-15T11:49:53.878710829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pt7zs,Uid:725b69eb-2b8b-432d-80e7-b8eb09b40560,Namespace:kube-system,Attempt:0,} returns sandbox id \"3999d2bc73c4da4eec0e00a4c4604cc7436c5adcf0d9c251fe4fb2b229e531cf\"" Jul 15 11:49:53.882140 env[1247]: time="2025-07-15T11:49:53.882118890Z" level=info msg="CreateContainer within sandbox \"3999d2bc73c4da4eec0e00a4c4604cc7436c5adcf0d9c251fe4fb2b229e531cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:49:53.889902 systemd-resolved[1205]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:49:53.912149 env[1247]: time="2025-07-15T11:49:53.912121409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-49jjx,Uid:55727054-8893-4469-baf3-5c3c19978026,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee300defb3b33c92e4f5adbcdb625e5560e06ea4e0f3d78f43660abca2ca9652\"" Jul 15 11:49:53.914502 env[1247]: time="2025-07-15T11:49:53.914478751Z" level=info msg="CreateContainer within sandbox \"ee300defb3b33c92e4f5adbcdb625e5560e06ea4e0f3d78f43660abca2ca9652\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:49:54.099686 env[1247]: time="2025-07-15T11:49:54.098952494Z" level=info msg="CreateContainer within sandbox \"3999d2bc73c4da4eec0e00a4c4604cc7436c5adcf0d9c251fe4fb2b229e531cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a4dc971394b08d362dd15604ac5a4012934a3b43c312410b27405e70fbdeb0f\"" Jul 15 11:49:54.099686 env[1247]: time="2025-07-15T11:49:54.099587698Z" level=info msg="StartContainer for \"2a4dc971394b08d362dd15604ac5a4012934a3b43c312410b27405e70fbdeb0f\"" Jul 15 11:49:54.100085 env[1247]: time="2025-07-15T11:49:54.100044755Z" level=info msg="CreateContainer within sandbox \"ee300defb3b33c92e4f5adbcdb625e5560e06ea4e0f3d78f43660abca2ca9652\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a4f6f4aed120f19913832f430e5e677bf465e220c819fc22d157998bc029645\"" Jul 15 11:49:54.100393 env[1247]: time="2025-07-15T11:49:54.100364665Z" level=info msg="StartContainer for \"5a4f6f4aed120f19913832f430e5e677bf465e220c819fc22d157998bc029645\"" Jul 15 11:49:54.114966 systemd[1]: Started cri-containerd-2a4dc971394b08d362dd15604ac5a4012934a3b43c312410b27405e70fbdeb0f.scope. Jul 15 11:49:54.121719 systemd[1]: Started cri-containerd-5a4f6f4aed120f19913832f430e5e677bf465e220c819fc22d157998bc029645.scope. 
Jul 15 11:49:54.242751 env[1247]: time="2025-07-15T11:49:54.242682781Z" level=info msg="StartContainer for \"2a4dc971394b08d362dd15604ac5a4012934a3b43c312410b27405e70fbdeb0f\" returns successfully" Jul 15 11:49:54.242865 env[1247]: time="2025-07-15T11:49:54.242713523Z" level=info msg="StartContainer for \"5a4f6f4aed120f19913832f430e5e677bf465e220c819fc22d157998bc029645\" returns successfully" Jul 15 11:49:54.926625 kubelet[2070]: I0715 11:49:54.926572 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pt7zs" podStartSLOduration=22.926557505 podStartE2EDuration="22.926557505s" podCreationTimestamp="2025-07-15 11:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:49:54.9181052 +0000 UTC m=+27.347308087" watchObservedRunningTime="2025-07-15 11:49:54.926557505 +0000 UTC m=+27.355760386" Jul 15 11:49:54.934487 kubelet[2070]: I0715 11:49:54.934441 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-49jjx" podStartSLOduration=22.934429711 podStartE2EDuration="22.934429711s" podCreationTimestamp="2025-07-15 11:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:49:54.933506994 +0000 UTC m=+27.362709882" watchObservedRunningTime="2025-07-15 11:49:54.934429711 +0000 UTC m=+27.363632593" Jul 15 11:50:38.683813 systemd[1]: Started sshd@5-139.178.70.105:22-147.75.109.163:51148.service. Jul 15 11:50:38.786711 sshd[3415]: Accepted publickey for core from 147.75.109.163 port 51148 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:50:38.788558 sshd[3415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:50:38.792968 systemd[1]: Started session-8.scope. Jul 15 11:50:38.793245 systemd-logind[1241]: New session 8 of user core. Jul 15 11:50:39.089259 sshd[3415]: pam_unix(sshd:session): session closed for user core Jul 15 11:50:39.091991 systemd-logind[1241]: Session 8 logged out. Waiting for processes to exit. Jul 15 11:50:39.092199 systemd[1]: sshd@5-139.178.70.105:22-147.75.109.163:51148.service: Deactivated successfully. Jul 15 11:50:39.092636 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 11:50:39.093200 systemd-logind[1241]: Removed session 8. Jul 15 11:50:44.092283 systemd[1]: Started sshd@6-139.178.70.105:22-147.75.109.163:51160.service. Jul 15 11:50:44.126168 sshd[3428]: Accepted publickey for core from 147.75.109.163 port 51160 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:50:44.129696 systemd[1]: Started session-9.scope. Jul 15 11:50:44.126591 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:50:44.130024 systemd-logind[1241]: New session 9 of user core. Jul 15 11:50:44.331904 sshd[3428]: pam_unix(sshd:session): session closed for user core Jul 15 11:50:44.333977 systemd[1]: sshd@6-139.178.70.105:22-147.75.109.163:51160.service: Deactivated successfully. Jul 15 11:50:44.334536 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 11:50:44.334786 systemd-logind[1241]: Session 9 logged out. Waiting for processes to exit. Jul 15 11:50:44.335267 systemd-logind[1241]: Removed session 9. Jul 15 11:50:49.334481 systemd[1]: Started sshd@7-139.178.70.105:22-147.75.109.163:60194.service. 
Jul 15 11:50:49.441769 sshd[3441]: Accepted publickey for core from 147.75.109.163 port 60194 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:50:49.442861 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:50:49.445561 systemd-logind[1241]: New session 10 of user core. Jul 15 11:50:49.446142 systemd[1]: Started session-10.scope. Jul 15 11:50:49.596306 sshd[3441]: pam_unix(sshd:session): session closed for user core Jul 15 11:50:49.597991 systemd-logind[1241]: Session 10 logged out. Waiting for processes to exit. Jul 15 11:50:49.598170 systemd[1]: sshd@7-139.178.70.105:22-147.75.109.163:60194.service: Deactivated successfully. Jul 15 11:50:49.598778 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 11:50:49.599362 systemd-logind[1241]: Removed session 10. Jul 15 11:50:54.598955 systemd[1]: Started sshd@8-139.178.70.105:22-147.75.109.163:60196.service. Jul 15 11:50:54.631447 sshd[3456]: Accepted publickey for core from 147.75.109.163 port 60196 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:50:54.632498 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:50:54.635043 systemd-logind[1241]: New session 11 of user core. Jul 15 11:50:54.635605 systemd[1]: Started session-11.scope. Jul 15 11:50:54.755312 sshd[3456]: pam_unix(sshd:session): session closed for user core Jul 15 11:50:54.757720 systemd[1]: Started sshd@9-139.178.70.105:22-147.75.109.163:60208.service. Jul 15 11:50:54.765191 systemd[1]: sshd@8-139.178.70.105:22-147.75.109.163:60196.service: Deactivated successfully. Jul 15 11:50:54.765690 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 11:50:54.766179 systemd-logind[1241]: Session 11 logged out. Waiting for processes to exit. Jul 15 11:50:54.766704 systemd-logind[1241]: Removed session 11. Jul 15 11:50:54.791875 sshd[3468]: Accepted publickey for core from 147.75.109.163 port 60208 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:50:54.792755 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:50:54.795289 systemd-logind[1241]: New session 12 of user core. Jul 15 11:50:54.795802 systemd[1]: Started session-12.scope. Jul 15 11:50:55.273271 sshd[3468]: pam_unix(sshd:session): session closed for user core Jul 15 11:50:55.276940 systemd[1]: Started sshd@10-139.178.70.105:22-147.75.109.163:60216.service. Jul 15 11:50:55.277590 systemd[1]: sshd@9-139.178.70.105:22-147.75.109.163:60208.service: Deactivated successfully. Jul 15 11:50:55.278360 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 11:50:55.280367 systemd-logind[1241]: Session 12 logged out. Waiting for processes to exit. Jul 15 11:50:55.283456 systemd-logind[1241]: Removed session 12. Jul 15 11:50:55.357234 sshd[3477]: Accepted publickey for core from 147.75.109.163 port 60216 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:50:55.358087 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:50:55.361152 systemd[1]: Started session-13.scope. Jul 15 11:50:55.361534 systemd-logind[1241]: New session 13 of user core. Jul 15 11:50:55.473861 sshd[3477]: pam_unix(sshd:session): session closed for user core Jul 15 11:50:55.475626 systemd[1]: sshd@10-139.178.70.105:22-147.75.109.163:60216.service: Deactivated successfully. Jul 15 11:50:55.476126 systemd[1]: session-13.scope: Deactivated successfully. 
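Each "Accepted publickey for core" entry in this stretch identifies the client key only by its SHA256 fingerprint (SHA256:+CaGzVJdBS9…). A sketch, assuming golang.org/x/crypto/ssh and a hypothetical authorized_keys path, that prints the same fingerprint form for the first key in such a file.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical path: the log shows only the resulting fingerprint, not where the key is stored.
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read authorized_keys:", err)
		os.Exit(1)
	}

	pub, comment, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse key:", err)
		os.Exit(1)
	}

	// Prints e.g. "ssh-rsa SHA256:+CaGzVJdBS9... user@host", the same SHA256 form sshd logs above.
	fmt.Println(pub.Type(), ssh.FingerprintSHA256(pub), comment)
}
```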
Jul 15 11:50:55.476679 systemd-logind[1241]: Session 13 logged out. Waiting for processes to exit. Jul 15 11:50:55.477302 systemd-logind[1241]: Removed session 13. Jul 15 11:51:00.477345 systemd[1]: Started sshd@11-139.178.70.105:22-147.75.109.163:34326.service. Jul 15 11:51:00.509247 sshd[3489]: Accepted publickey for core from 147.75.109.163 port 34326 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:00.510392 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:00.513642 systemd[1]: Started session-14.scope. Jul 15 11:51:00.514535 systemd-logind[1241]: New session 14 of user core. Jul 15 11:51:00.613634 sshd[3489]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:00.615559 systemd[1]: sshd@11-139.178.70.105:22-147.75.109.163:34326.service: Deactivated successfully. Jul 15 11:51:00.616026 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 11:51:00.616567 systemd-logind[1241]: Session 14 logged out. Waiting for processes to exit. Jul 15 11:51:00.617083 systemd-logind[1241]: Removed session 14. Jul 15 11:51:05.617926 systemd[1]: Started sshd@12-139.178.70.105:22-147.75.109.163:34334.service. Jul 15 11:51:05.653898 sshd[3502]: Accepted publickey for core from 147.75.109.163 port 34334 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:05.654956 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:05.659156 systemd[1]: Started session-15.scope. Jul 15 11:51:05.659991 systemd-logind[1241]: New session 15 of user core. Jul 15 11:51:05.752120 sshd[3502]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:05.753612 systemd[1]: Started sshd@13-139.178.70.105:22-147.75.109.163:34342.service. Jul 15 11:51:05.755273 systemd[1]: sshd@12-139.178.70.105:22-147.75.109.163:34334.service: Deactivated successfully. Jul 15 11:51:05.755791 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 11:51:05.756619 systemd-logind[1241]: Session 15 logged out. Waiting for processes to exit. Jul 15 11:51:05.757173 systemd-logind[1241]: Removed session 15. Jul 15 11:51:05.787074 sshd[3513]: Accepted publickey for core from 147.75.109.163 port 34342 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:05.787980 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:05.791535 systemd[1]: Started session-16.scope. Jul 15 11:51:05.792487 systemd-logind[1241]: New session 16 of user core. Jul 15 11:51:06.328421 sshd[3513]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:06.330736 systemd[1]: Started sshd@14-139.178.70.105:22-147.75.109.163:34358.service. Jul 15 11:51:06.334710 systemd[1]: sshd@13-139.178.70.105:22-147.75.109.163:34342.service: Deactivated successfully. Jul 15 11:51:06.335425 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 11:51:06.335870 systemd-logind[1241]: Session 16 logged out. Waiting for processes to exit. Jul 15 11:51:06.336456 systemd-logind[1241]: Removed session 16. Jul 15 11:51:06.371734 sshd[3523]: Accepted publickey for core from 147.75.109.163 port 34358 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:06.372630 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:06.376091 systemd-logind[1241]: New session 17 of user core. Jul 15 11:51:06.376125 systemd[1]: Started session-17.scope. 
Jul 15 11:51:07.196329 sshd[3523]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:07.198495 systemd[1]: Started sshd@15-139.178.70.105:22-147.75.109.163:34362.service. Jul 15 11:51:07.198833 systemd[1]: sshd@14-139.178.70.105:22-147.75.109.163:34358.service: Deactivated successfully. Jul 15 11:51:07.200018 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 11:51:07.200706 systemd-logind[1241]: Session 17 logged out. Waiting for processes to exit. Jul 15 11:51:07.201604 systemd-logind[1241]: Removed session 17. Jul 15 11:51:07.265100 sshd[3538]: Accepted publickey for core from 147.75.109.163 port 34362 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:07.266248 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:07.268934 systemd-logind[1241]: New session 18 of user core. Jul 15 11:51:07.269717 systemd[1]: Started session-18.scope. Jul 15 11:51:07.482255 sshd[3538]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:07.483514 systemd[1]: Started sshd@16-139.178.70.105:22-147.75.109.163:34372.service. Jul 15 11:51:07.487244 systemd[1]: sshd@15-139.178.70.105:22-147.75.109.163:34362.service: Deactivated successfully. Jul 15 11:51:07.487699 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 11:51:07.489203 systemd-logind[1241]: Session 18 logged out. Waiting for processes to exit. Jul 15 11:51:07.492581 systemd-logind[1241]: Removed session 18. Jul 15 11:51:07.519335 sshd[3549]: Accepted publickey for core from 147.75.109.163 port 34372 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:07.520620 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:07.523464 systemd-logind[1241]: New session 19 of user core. Jul 15 11:51:07.524447 systemd[1]: Started session-19.scope. Jul 15 11:51:07.621380 sshd[3549]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:07.626562 systemd[1]: sshd@16-139.178.70.105:22-147.75.109.163:34372.service: Deactivated successfully. Jul 15 11:51:07.627018 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 11:51:07.627456 systemd-logind[1241]: Session 19 logged out. Waiting for processes to exit. Jul 15 11:51:07.628149 systemd-logind[1241]: Removed session 19. Jul 15 11:51:12.625283 systemd[1]: Started sshd@17-139.178.70.105:22-147.75.109.163:34286.service. Jul 15 11:51:12.899129 sshd[3564]: Accepted publickey for core from 147.75.109.163 port 34286 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:12.900737 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:12.905237 systemd[1]: Started session-20.scope. Jul 15 11:51:12.905714 systemd-logind[1241]: New session 20 of user core. Jul 15 11:51:13.021267 sshd[3564]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:13.026337 systemd[1]: sshd@17-139.178.70.105:22-147.75.109.163:34286.service: Deactivated successfully. Jul 15 11:51:13.027041 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 11:51:13.027771 systemd-logind[1241]: Session 20 logged out. Waiting for processes to exit. Jul 15 11:51:13.028254 systemd-logind[1241]: Removed session 20. Jul 15 11:51:18.024429 systemd[1]: Started sshd@18-139.178.70.105:22-147.75.109.163:35752.service. 
Jul 15 11:51:18.140180 sshd[3575]: Accepted publickey for core from 147.75.109.163 port 35752 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:18.141301 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:18.146342 systemd-logind[1241]: New session 21 of user core. Jul 15 11:51:18.146841 systemd[1]: Started session-21.scope. Jul 15 11:51:18.243218 sshd[3575]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:18.244912 systemd[1]: sshd@18-139.178.70.105:22-147.75.109.163:35752.service: Deactivated successfully. Jul 15 11:51:18.245380 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 11:51:18.245773 systemd-logind[1241]: Session 21 logged out. Waiting for processes to exit. Jul 15 11:51:18.246352 systemd-logind[1241]: Removed session 21. Jul 15 11:51:23.247042 systemd[1]: Started sshd@19-139.178.70.105:22-147.75.109.163:35766.service. Jul 15 11:51:23.280873 sshd[3587]: Accepted publickey for core from 147.75.109.163 port 35766 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:23.281830 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:23.284858 systemd[1]: Started session-22.scope. Jul 15 11:51:23.285656 systemd-logind[1241]: New session 22 of user core. Jul 15 11:51:23.422222 sshd[3587]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:23.423910 systemd[1]: sshd@19-139.178.70.105:22-147.75.109.163:35766.service: Deactivated successfully. Jul 15 11:51:23.424461 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 11:51:23.425425 systemd-logind[1241]: Session 22 logged out. Waiting for processes to exit. Jul 15 11:51:23.425952 systemd-logind[1241]: Removed session 22. Jul 15 11:51:28.425312 systemd[1]: Started sshd@20-139.178.70.105:22-147.75.109.163:33500.service. Jul 15 11:51:28.458310 sshd[3601]: Accepted publickey for core from 147.75.109.163 port 33500 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:28.459589 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:28.462397 systemd-logind[1241]: New session 23 of user core. Jul 15 11:51:28.462916 systemd[1]: Started session-23.scope. Jul 15 11:51:28.575042 sshd[3601]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:28.577790 systemd[1]: Started sshd@21-139.178.70.105:22-147.75.109.163:33516.service. Jul 15 11:51:28.579879 systemd[1]: sshd@20-139.178.70.105:22-147.75.109.163:33500.service: Deactivated successfully. Jul 15 11:51:28.580334 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 11:51:28.580780 systemd-logind[1241]: Session 23 logged out. Waiting for processes to exit. Jul 15 11:51:28.581570 systemd-logind[1241]: Removed session 23. Jul 15 11:51:28.917704 sshd[3612]: Accepted publickey for core from 147.75.109.163 port 33516 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:28.919167 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:28.927664 systemd-logind[1241]: New session 24 of user core. Jul 15 11:51:28.928271 systemd[1]: Started session-24.scope. Jul 15 11:51:32.953148 systemd[1]: run-containerd-runc-k8s.io-e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba-runc.r0XQ3p.mount: Deactivated successfully. 
Jul 15 11:51:33.098464 env[1247]: time="2025-07-15T11:51:33.098309543Z" level=info msg="StopContainer for \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\" with timeout 30 (s)" Jul 15 11:51:33.098729 env[1247]: time="2025-07-15T11:51:33.098533280Z" level=info msg="Stop container \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\" with signal terminated" Jul 15 11:51:33.112371 env[1247]: time="2025-07-15T11:51:33.112324615Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:51:33.115012 systemd[1]: cri-containerd-281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b.scope: Deactivated successfully. Jul 15 11:51:33.126206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b-rootfs.mount: Deactivated successfully. Jul 15 11:51:33.127957 env[1247]: time="2025-07-15T11:51:33.127940318Z" level=info msg="StopContainer for \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\" with timeout 2 (s)" Jul 15 11:51:33.128283 env[1247]: time="2025-07-15T11:51:33.128265404Z" level=info msg="Stop container \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\" with signal terminated" Jul 15 11:51:33.132919 systemd-networkd[1065]: lxc_health: Link DOWN Jul 15 11:51:33.132924 systemd-networkd[1065]: lxc_health: Lost carrier Jul 15 11:51:33.207303 systemd[1]: cri-containerd-e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba.scope: Deactivated successfully. Jul 15 11:51:33.207466 systemd[1]: cri-containerd-e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba.scope: Consumed 4.533s CPU time. Jul 15 11:51:33.220531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba-rootfs.mount: Deactivated successfully. 
Jul 15 11:51:33.227997 env[1247]: time="2025-07-15T11:51:33.227951222Z" level=info msg="shim disconnected" id=281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b Jul 15 11:51:33.228137 env[1247]: time="2025-07-15T11:51:33.228124682Z" level=warning msg="cleaning up after shim disconnected" id=281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b namespace=k8s.io Jul 15 11:51:33.228200 env[1247]: time="2025-07-15T11:51:33.228190187Z" level=info msg="cleaning up dead shim" Jul 15 11:51:33.233218 env[1247]: time="2025-07-15T11:51:33.233190074Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3679 runtime=io.containerd.runc.v2\n" Jul 15 11:51:33.242345 env[1247]: time="2025-07-15T11:51:33.242315271Z" level=info msg="StopContainer for \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\" returns successfully" Jul 15 11:51:33.256442 env[1247]: time="2025-07-15T11:51:33.256419274Z" level=info msg="StopPodSandbox for \"6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405\"" Jul 15 11:51:33.257036 env[1247]: time="2025-07-15T11:51:33.256567025Z" level=info msg="Container to stop \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:51:33.260897 systemd[1]: cri-containerd-6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405.scope: Deactivated successfully. Jul 15 11:51:33.272226 env[1247]: time="2025-07-15T11:51:33.272198530Z" level=info msg="shim disconnected" id=e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba Jul 15 11:51:33.272397 env[1247]: time="2025-07-15T11:51:33.272386548Z" level=warning msg="cleaning up after shim disconnected" id=e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba namespace=k8s.io Jul 15 11:51:33.272471 env[1247]: time="2025-07-15T11:51:33.272461008Z" level=info msg="cleaning up dead shim" Jul 15 11:51:33.280294 env[1247]: time="2025-07-15T11:51:33.280261550Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3707 runtime=io.containerd.runc.v2\n" Jul 15 11:51:33.292379 env[1247]: time="2025-07-15T11:51:33.292344826Z" level=info msg="StopContainer for \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\" returns successfully" Jul 15 11:51:33.293222 env[1247]: time="2025-07-15T11:51:33.292680559Z" level=info msg="StopPodSandbox for \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\"" Jul 15 11:51:33.293222 env[1247]: time="2025-07-15T11:51:33.292719620Z" level=info msg="Container to stop \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:51:33.293222 env[1247]: time="2025-07-15T11:51:33.292728734Z" level=info msg="Container to stop \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:51:33.293222 env[1247]: time="2025-07-15T11:51:33.292735402Z" level=info msg="Container to stop \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:51:33.293222 env[1247]: time="2025-07-15T11:51:33.292742394Z" level=info msg="Container to stop \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\" must be in running 
or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:51:33.293222 env[1247]: time="2025-07-15T11:51:33.292748082Z" level=info msg="Container to stop \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:51:33.295679 env[1247]: time="2025-07-15T11:51:33.295657054Z" level=info msg="shim disconnected" id=6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405 Jul 15 11:51:33.296148 env[1247]: time="2025-07-15T11:51:33.296134963Z" level=warning msg="cleaning up after shim disconnected" id=6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405 namespace=k8s.io Jul 15 11:51:33.296214 env[1247]: time="2025-07-15T11:51:33.296203339Z" level=info msg="cleaning up dead shim" Jul 15 11:51:33.297699 systemd[1]: cri-containerd-0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305.scope: Deactivated successfully. Jul 15 11:51:33.303787 env[1247]: time="2025-07-15T11:51:33.303758719Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3732 runtime=io.containerd.runc.v2\n" Jul 15 11:51:33.312205 env[1247]: time="2025-07-15T11:51:33.312179108Z" level=info msg="TearDown network for sandbox \"6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405\" successfully" Jul 15 11:51:33.312205 env[1247]: time="2025-07-15T11:51:33.312200614Z" level=info msg="StopPodSandbox for \"6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405\" returns successfully" Jul 15 11:51:33.369983 env[1247]: time="2025-07-15T11:51:33.369946489Z" level=info msg="shim disconnected" id=0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305 Jul 15 11:51:33.369983 env[1247]: time="2025-07-15T11:51:33.369978969Z" level=warning msg="cleaning up after shim disconnected" id=0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305 namespace=k8s.io Jul 15 11:51:33.369983 env[1247]: time="2025-07-15T11:51:33.369985512Z" level=info msg="cleaning up dead shim" Jul 15 11:51:33.376287 env[1247]: time="2025-07-15T11:51:33.376263946Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3757 runtime=io.containerd.runc.v2\n" Jul 15 11:51:33.379482 env[1247]: time="2025-07-15T11:51:33.379465557Z" level=info msg="TearDown network for sandbox \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" successfully" Jul 15 11:51:33.379551 env[1247]: time="2025-07-15T11:51:33.379538509Z" level=info msg="StopPodSandbox for \"0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305\" returns successfully" Jul 15 11:51:33.419513 kubelet[2070]: I0715 11:51:33.419483 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f94974b-ff32-49f9-94ba-cd4194e636d5-cilium-config-path\") pod \"0f94974b-ff32-49f9-94ba-cd4194e636d5\" (UID: \"0f94974b-ff32-49f9-94ba-cd4194e636d5\") " Jul 15 11:51:33.419763 kubelet[2070]: I0715 11:51:33.419523 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-942k2\" (UniqueName: \"kubernetes.io/projected/0f94974b-ff32-49f9-94ba-cd4194e636d5-kube-api-access-942k2\") pod \"0f94974b-ff32-49f9-94ba-cd4194e636d5\" (UID: \"0f94974b-ff32-49f9-94ba-cd4194e636d5\") " Jul 15 11:51:33.484507 kubelet[2070]: I0715 11:51:33.474338 2070 operation_generator.go:780] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f94974b-ff32-49f9-94ba-cd4194e636d5-kube-api-access-942k2" (OuterVolumeSpecName: "kube-api-access-942k2") pod "0f94974b-ff32-49f9-94ba-cd4194e636d5" (UID: "0f94974b-ff32-49f9-94ba-cd4194e636d5"). InnerVolumeSpecName "kube-api-access-942k2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:51:33.490736 kubelet[2070]: I0715 11:51:33.474252 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f94974b-ff32-49f9-94ba-cd4194e636d5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f94974b-ff32-49f9-94ba-cd4194e636d5" (UID: "0f94974b-ff32-49f9-94ba-cd4194e636d5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:51:33.520590 kubelet[2070]: I0715 11:51:33.520566 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-config-path\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526764 kubelet[2070]: I0715 11:51:33.520745 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d634bd25-2116-4c1c-a4e1-2c698567a88e-clustermesh-secrets\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526764 kubelet[2070]: I0715 11:51:33.520763 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-xtables-lock\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526764 kubelet[2070]: I0715 11:51:33.520776 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-host-proc-sys-kernel\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526764 kubelet[2070]: I0715 11:51:33.520785 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-hostproc\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526764 kubelet[2070]: I0715 11:51:33.520795 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-etc-cni-netd\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526764 kubelet[2070]: I0715 11:51:33.520802 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-host-proc-sys-net\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526900 kubelet[2070]: I0715 11:51:33.520811 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-lib-modules\") pod 
\"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526900 kubelet[2070]: I0715 11:51:33.520820 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-run\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526900 kubelet[2070]: I0715 11:51:33.520830 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjsct\" (UniqueName: \"kubernetes.io/projected/d634bd25-2116-4c1c-a4e1-2c698567a88e-kube-api-access-tjsct\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526900 kubelet[2070]: I0715 11:51:33.520840 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-cgroup\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526900 kubelet[2070]: I0715 11:51:33.520849 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cni-path\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.526900 kubelet[2070]: I0715 11:51:33.520858 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d634bd25-2116-4c1c-a4e1-2c698567a88e-hubble-tls\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.527025 kubelet[2070]: I0715 11:51:33.520866 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-bpf-maps\") pod \"d634bd25-2116-4c1c-a4e1-2c698567a88e\" (UID: \"d634bd25-2116-4c1c-a4e1-2c698567a88e\") " Jul 15 11:51:33.527025 kubelet[2070]: I0715 11:51:33.520898 2070 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f94974b-ff32-49f9-94ba-cd4194e636d5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.527025 kubelet[2070]: I0715 11:51:33.520905 2070 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-942k2\" (UniqueName: \"kubernetes.io/projected/0f94974b-ff32-49f9-94ba-cd4194e636d5-kube-api-access-942k2\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.527025 kubelet[2070]: I0715 11:51:33.520935 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.527025 kubelet[2070]: I0715 11:51:33.521003 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.539810 kubelet[2070]: I0715 11:51:33.539781 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.543509 kubelet[2070]: I0715 11:51:33.539819 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.543509 kubelet[2070]: I0715 11:51:33.539900 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.543509 kubelet[2070]: I0715 11:51:33.539912 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.543509 kubelet[2070]: I0715 11:51:33.539923 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-hostproc" (OuterVolumeSpecName: "hostproc") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.543509 kubelet[2070]: I0715 11:51:33.539932 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.543631 kubelet[2070]: I0715 11:51:33.539943 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cni-path" (OuterVolumeSpecName: "cni-path") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.543631 kubelet[2070]: I0715 11:51:33.539952 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:33.543631 kubelet[2070]: I0715 11:51:33.540963 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:51:33.552040 kubelet[2070]: I0715 11:51:33.552020 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d634bd25-2116-4c1c-a4e1-2c698567a88e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:51:33.564315 kubelet[2070]: I0715 11:51:33.564298 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d634bd25-2116-4c1c-a4e1-2c698567a88e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:51:33.573320 kubelet[2070]: I0715 11:51:33.573303 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d634bd25-2116-4c1c-a4e1-2c698567a88e-kube-api-access-tjsct" (OuterVolumeSpecName: "kube-api-access-tjsct") pod "d634bd25-2116-4c1c-a4e1-2c698567a88e" (UID: "d634bd25-2116-4c1c-a4e1-2c698567a88e"). InnerVolumeSpecName "kube-api-access-tjsct". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:51:33.621473 kubelet[2070]: I0715 11:51:33.621449 2070 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621596 kubelet[2070]: I0715 11:51:33.621587 2070 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621660 kubelet[2070]: I0715 11:51:33.621652 2070 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621710 kubelet[2070]: I0715 11:51:33.621701 2070 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621756 kubelet[2070]: I0715 11:51:33.621748 2070 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621803 kubelet[2070]: I0715 11:51:33.621795 2070 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tjsct\" (UniqueName: \"kubernetes.io/projected/d634bd25-2116-4c1c-a4e1-2c698567a88e-kube-api-access-tjsct\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621855 kubelet[2070]: I0715 11:51:33.621847 2070 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621902 kubelet[2070]: I0715 11:51:33.621895 2070 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621948 kubelet[2070]: I0715 11:51:33.621941 2070 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.621992 kubelet[2070]: I0715 11:51:33.621985 2070 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d634bd25-2116-4c1c-a4e1-2c698567a88e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.622045 kubelet[2070]: I0715 11:51:33.622038 2070 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d634bd25-2116-4c1c-a4e1-2c698567a88e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.622116 kubelet[2070]: I0715 11:51:33.622109 2070 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.622174 kubelet[2070]: I0715 11:51:33.622166 2070 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d634bd25-2116-4c1c-a4e1-2c698567a88e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 
11:51:33.622220 kubelet[2070]: I0715 11:51:33.622213 2070 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d634bd25-2116-4c1c-a4e1-2c698567a88e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:33.713194 systemd[1]: Removed slice kubepods-burstable-podd634bd25_2116_4c1c_a4e1_2c698567a88e.slice. Jul 15 11:51:33.713252 systemd[1]: kubepods-burstable-podd634bd25_2116_4c1c_a4e1_2c698567a88e.slice: Consumed 4.606s CPU time. Jul 15 11:51:33.718181 systemd[1]: Removed slice kubepods-besteffort-pod0f94974b_ff32_49f9_94ba_cd4194e636d5.slice. Jul 15 11:51:33.950444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305-rootfs.mount: Deactivated successfully. Jul 15 11:51:33.950520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405-rootfs.mount: Deactivated successfully. Jul 15 11:51:33.950582 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f9fdcdb0940b5f02e452259e67d0aae6d6cb3fa61b3c73d2da52431133e3405-shm.mount: Deactivated successfully. Jul 15 11:51:33.950633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0761e267cc5a314a6d754db30597da38408fbc053081a0fbf253afe242831305-shm.mount: Deactivated successfully. Jul 15 11:51:33.950676 systemd[1]: var-lib-kubelet-pods-0f94974b\x2dff32\x2d49f9\x2d94ba\x2dcd4194e636d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d942k2.mount: Deactivated successfully. Jul 15 11:51:33.950726 systemd[1]: var-lib-kubelet-pods-d634bd25\x2d2116\x2d4c1c\x2da4e1\x2d2c698567a88e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtjsct.mount: Deactivated successfully. Jul 15 11:51:33.950773 systemd[1]: var-lib-kubelet-pods-d634bd25\x2d2116\x2d4c1c\x2da4e1\x2d2c698567a88e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 11:51:33.950820 systemd[1]: var-lib-kubelet-pods-d634bd25\x2d2116\x2d4c1c\x2da4e1\x2d2c698567a88e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 15 11:51:34.140692 kubelet[2070]: I0715 11:51:34.140666 2070 scope.go:117] "RemoveContainer" containerID="281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b" Jul 15 11:51:34.141807 env[1247]: time="2025-07-15T11:51:34.141780967Z" level=info msg="RemoveContainer for \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\"" Jul 15 11:51:34.161624 env[1247]: time="2025-07-15T11:51:34.161590119Z" level=info msg="RemoveContainer for \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\" returns successfully" Jul 15 11:51:34.161822 kubelet[2070]: I0715 11:51:34.161795 2070 scope.go:117] "RemoveContainer" containerID="281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b" Jul 15 11:51:34.162114 env[1247]: time="2025-07-15T11:51:34.161999995Z" level=error msg="ContainerStatus for \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\": not found" Jul 15 11:51:34.162168 kubelet[2070]: E0715 11:51:34.162155 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\": not found" containerID="281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b" Jul 15 11:51:34.162243 kubelet[2070]: I0715 11:51:34.162179 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b"} err="failed to get container status \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\": rpc error: code = NotFound desc = an error occurred when try to find container \"281b8fe9f8b07cac74b4af9a12509a10195deb2a9458d511bb068157bd51524b\": not found" Jul 15 11:51:34.162279 kubelet[2070]: I0715 11:51:34.162243 2070 scope.go:117] "RemoveContainer" containerID="e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba" Jul 15 11:51:34.163075 env[1247]: time="2025-07-15T11:51:34.162967175Z" level=info msg="RemoveContainer for \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\"" Jul 15 11:51:34.173424 env[1247]: time="2025-07-15T11:51:34.172946530Z" level=info msg="RemoveContainer for \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\" returns successfully" Jul 15 11:51:34.173548 kubelet[2070]: I0715 11:51:34.173149 2070 scope.go:117] "RemoveContainer" containerID="96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452" Jul 15 11:51:34.174922 env[1247]: time="2025-07-15T11:51:34.174638216Z" level=info msg="RemoveContainer for \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\"" Jul 15 11:51:34.181734 env[1247]: time="2025-07-15T11:51:34.181690919Z" level=info msg="RemoveContainer for \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\" returns successfully" Jul 15 11:51:34.182924 kubelet[2070]: I0715 11:51:34.182879 2070 scope.go:117] "RemoveContainer" containerID="319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da" Jul 15 11:51:34.185220 env[1247]: time="2025-07-15T11:51:34.185184910Z" level=info msg="RemoveContainer for \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\"" Jul 15 11:51:34.191306 env[1247]: time="2025-07-15T11:51:34.191276475Z" level=info msg="RemoveContainer for 
\"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\" returns successfully" Jul 15 11:51:34.191515 kubelet[2070]: I0715 11:51:34.191503 2070 scope.go:117] "RemoveContainer" containerID="98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c" Jul 15 11:51:34.192410 env[1247]: time="2025-07-15T11:51:34.192219806Z" level=info msg="RemoveContainer for \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\"" Jul 15 11:51:34.197616 env[1247]: time="2025-07-15T11:51:34.197559716Z" level=info msg="RemoveContainer for \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\" returns successfully" Jul 15 11:51:34.197753 kubelet[2070]: I0715 11:51:34.197741 2070 scope.go:117] "RemoveContainer" containerID="58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6" Jul 15 11:51:34.198723 env[1247]: time="2025-07-15T11:51:34.198562314Z" level=info msg="RemoveContainer for \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\"" Jul 15 11:51:34.205598 env[1247]: time="2025-07-15T11:51:34.204480734Z" level=info msg="RemoveContainer for \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\" returns successfully" Jul 15 11:51:34.205795 kubelet[2070]: I0715 11:51:34.205784 2070 scope.go:117] "RemoveContainer" containerID="e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba" Jul 15 11:51:34.206027 env[1247]: time="2025-07-15T11:51:34.205995374Z" level=error msg="ContainerStatus for \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\": not found" Jul 15 11:51:34.206144 kubelet[2070]: E0715 11:51:34.206133 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\": not found" containerID="e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba" Jul 15 11:51:34.206209 kubelet[2070]: I0715 11:51:34.206194 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba"} err="failed to get container status \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8108cb04b34913c03ff94f53841a7eddc5a7dff7c264949496755870e17e1ba\": not found" Jul 15 11:51:34.206259 kubelet[2070]: I0715 11:51:34.206250 2070 scope.go:117] "RemoveContainer" containerID="96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452" Jul 15 11:51:34.206460 env[1247]: time="2025-07-15T11:51:34.206418144Z" level=error msg="ContainerStatus for \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\": not found" Jul 15 11:51:34.206542 kubelet[2070]: E0715 11:51:34.206523 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\": not found" containerID="96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452" Jul 15 11:51:34.206575 kubelet[2070]: I0715 11:51:34.206545 2070 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452"} err="failed to get container status \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\": rpc error: code = NotFound desc = an error occurred when try to find container \"96f317a5042f8cb1780fd21068a3559c22582095cabd91059d77b95fafec5452\": not found" Jul 15 11:51:34.206575 kubelet[2070]: I0715 11:51:34.206558 2070 scope.go:117] "RemoveContainer" containerID="319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da" Jul 15 11:51:34.211161 kubelet[2070]: E0715 11:51:34.206766 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\": not found" containerID="319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da" Jul 15 11:51:34.211161 kubelet[2070]: I0715 11:51:34.206776 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da"} err="failed to get container status \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\": rpc error: code = NotFound desc = an error occurred when try to find container \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\": not found" Jul 15 11:51:34.211161 kubelet[2070]: I0715 11:51:34.206784 2070 scope.go:117] "RemoveContainer" containerID="98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c" Jul 15 11:51:34.211161 kubelet[2070]: E0715 11:51:34.206974 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\": not found" containerID="98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c" Jul 15 11:51:34.211161 kubelet[2070]: I0715 11:51:34.206983 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c"} err="failed to get container status \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\": rpc error: code = NotFound desc = an error occurred when try to find container \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\": not found" Jul 15 11:51:34.211161 kubelet[2070]: I0715 11:51:34.206991 2070 scope.go:117] "RemoveContainer" containerID="58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6" Jul 15 11:51:34.216794 env[1247]: time="2025-07-15T11:51:34.206698752Z" level=error msg="ContainerStatus for \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"319ea385661d17ccd7bed44d60de2a016c2353f0941ccfde8d334d201438e1da\": not found" Jul 15 11:51:34.216794 env[1247]: time="2025-07-15T11:51:34.206896107Z" level=error msg="ContainerStatus for \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98fcfd21baab9191ab3945fa31ebac9c835ca29024c5ca8c91b7349c1145831c\": not found" Jul 15 11:51:34.216794 env[1247]: time="2025-07-15T11:51:34.207095914Z" level=error msg="ContainerStatus for 
\"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\": not found" Jul 15 11:51:34.216860 kubelet[2070]: E0715 11:51:34.207168 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\": not found" containerID="58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6" Jul 15 11:51:34.216860 kubelet[2070]: I0715 11:51:34.207179 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6"} err="failed to get container status \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"58188d6ea20347298c88da3586b7b2b8fa5f7dfda2f3cd03dd43e0e5704696d6\": not found" Jul 15 11:51:34.826953 systemd[1]: Started sshd@22-139.178.70.105:22-147.75.109.163:33524.service. Jul 15 11:51:34.831485 sshd[3612]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:34.894920 systemd[1]: sshd@21-139.178.70.105:22-147.75.109.163:33516.service: Deactivated successfully. Jul 15 11:51:34.895599 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 11:51:34.896087 systemd-logind[1241]: Session 24 logged out. Waiting for processes to exit. Jul 15 11:51:34.896941 systemd-logind[1241]: Removed session 24. Jul 15 11:51:35.093178 sshd[3777]: Accepted publickey for core from 147.75.109.163 port 33524 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:35.093833 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:35.097260 systemd-logind[1241]: New session 25 of user core. Jul 15 11:51:35.097829 systemd[1]: Started session-25.scope. Jul 15 11:51:35.684533 sshd[3777]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:35.686631 systemd[1]: Started sshd@23-139.178.70.105:22-147.75.109.163:33536.service. Jul 15 11:51:35.689224 systemd[1]: sshd@22-139.178.70.105:22-147.75.109.163:33524.service: Deactivated successfully. Jul 15 11:51:35.689821 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 11:51:35.690962 systemd-logind[1241]: Session 25 logged out. Waiting for processes to exit. Jul 15 11:51:35.692013 systemd-logind[1241]: Removed session 25. Jul 15 11:51:35.718678 sshd[3787]: Accepted publickey for core from 147.75.109.163 port 33536 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:35.719748 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:35.723501 systemd[1]: Started session-26.scope. Jul 15 11:51:35.725372 systemd-logind[1241]: New session 26 of user core. 
Jul 15 11:51:35.739669 kubelet[2070]: I0715 11:51:35.739311 2070 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f94974b-ff32-49f9-94ba-cd4194e636d5" path="/var/lib/kubelet/pods/0f94974b-ff32-49f9-94ba-cd4194e636d5/volumes" Jul 15 11:51:35.744331 kubelet[2070]: I0715 11:51:35.744309 2070 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d634bd25-2116-4c1c-a4e1-2c698567a88e" path="/var/lib/kubelet/pods/d634bd25-2116-4c1c-a4e1-2c698567a88e/volumes" Jul 15 11:51:35.779121 kubelet[2070]: I0715 11:51:35.779099 2070 memory_manager.go:355] "RemoveStaleState removing state" podUID="0f94974b-ff32-49f9-94ba-cd4194e636d5" containerName="cilium-operator" Jul 15 11:51:35.779236 kubelet[2070]: I0715 11:51:35.779226 2070 memory_manager.go:355] "RemoveStaleState removing state" podUID="d634bd25-2116-4c1c-a4e1-2c698567a88e" containerName="cilium-agent" Jul 15 11:51:35.866818 kubelet[2070]: I0715 11:51:35.866717 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-config-path\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866818 kubelet[2070]: I0715 11:51:35.866741 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-host-proc-sys-net\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866818 kubelet[2070]: I0715 11:51:35.866753 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-hostproc\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866818 kubelet[2070]: I0715 11:51:35.866763 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-host-proc-sys-kernel\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866818 kubelet[2070]: I0715 11:51:35.866772 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-ipsec-secrets\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866818 kubelet[2070]: I0715 11:51:35.866782 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cni-path\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866999 kubelet[2070]: I0715 11:51:35.866791 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-lib-modules\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866999 
kubelet[2070]: I0715 11:51:35.866800 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-bpf-maps\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866999 kubelet[2070]: I0715 11:51:35.866807 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-cgroup\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866999 kubelet[2070]: I0715 11:51:35.866816 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-etc-cni-netd\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866999 kubelet[2070]: I0715 11:51:35.866825 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7208d8b-26ca-49d9-94f8-56bd11331f6c-hubble-tls\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.866999 kubelet[2070]: I0715 11:51:35.866833 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kp9f\" (UniqueName: \"kubernetes.io/projected/f7208d8b-26ca-49d9-94f8-56bd11331f6c-kube-api-access-4kp9f\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.872763 kubelet[2070]: I0715 11:51:35.866843 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-run\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.872763 kubelet[2070]: I0715 11:51:35.866851 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-xtables-lock\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.872763 kubelet[2070]: I0715 11:51:35.866860 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7208d8b-26ca-49d9-94f8-56bd11331f6c-clustermesh-secrets\") pod \"cilium-zsw5j\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " pod="kube-system/cilium-zsw5j" Jul 15 11:51:35.867992 systemd[1]: Created slice kubepods-burstable-podf7208d8b_26ca_49d9_94f8_56bd11331f6c.slice. Jul 15 11:51:36.177225 env[1247]: time="2025-07-15T11:51:36.177189405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zsw5j,Uid:f7208d8b-26ca-49d9-94f8-56bd11331f6c,Namespace:kube-system,Attempt:0,}" Jul 15 11:51:36.263376 env[1247]: time="2025-07-15T11:51:36.263320042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:51:36.263516 env[1247]: time="2025-07-15T11:51:36.263357315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:51:36.263586 env[1247]: time="2025-07-15T11:51:36.263501074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:51:36.264140 env[1247]: time="2025-07-15T11:51:36.263763188Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436 pid=3807 runtime=io.containerd.runc.v2 Jul 15 11:51:36.273417 systemd[1]: Started cri-containerd-a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436.scope. Jul 15 11:51:36.291615 env[1247]: time="2025-07-15T11:51:36.291586373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zsw5j,Uid:f7208d8b-26ca-49d9-94f8-56bd11331f6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436\"" Jul 15 11:51:36.294320 env[1247]: time="2025-07-15T11:51:36.294264263Z" level=info msg="CreateContainer within sandbox \"a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:51:36.385076 env[1247]: time="2025-07-15T11:51:36.384947855Z" level=info msg="CreateContainer within sandbox \"a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5\"" Jul 15 11:51:36.386714 env[1247]: time="2025-07-15T11:51:36.385736484Z" level=info msg="StartContainer for \"254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5\"" Jul 15 11:51:36.399659 systemd[1]: Started cri-containerd-254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5.scope. Jul 15 11:51:36.412218 systemd[1]: cri-containerd-254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5.scope: Deactivated successfully. Jul 15 11:51:36.515957 sshd[3787]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:36.518525 systemd[1]: Started sshd@24-139.178.70.105:22-147.75.109.163:33552.service. Jul 15 11:51:36.522714 systemd[1]: sshd@23-139.178.70.105:22-147.75.109.163:33536.service: Deactivated successfully. Jul 15 11:51:36.523179 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 11:51:36.523629 systemd-logind[1241]: Session 26 logged out. Waiting for processes to exit. Jul 15 11:51:36.524202 systemd-logind[1241]: Removed session 26. Jul 15 11:51:36.554094 sshd[3865]: Accepted publickey for core from 147.75.109.163 port 33552 ssh2: RSA SHA256:+CaGzVJdBS9axnUtiVJoq/0yBbuuMx53Aeb6f4RIUIo Jul 15 11:51:36.555381 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:51:36.560575 systemd-logind[1241]: New session 27 of user core. Jul 15 11:51:36.561097 systemd[1]: Started session-27.scope. 
Jul 15 11:51:36.608352 env[1247]: time="2025-07-15T11:51:36.608322113Z" level=info msg="shim disconnected" id=254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5 Jul 15 11:51:36.608560 env[1247]: time="2025-07-15T11:51:36.608547803Z" level=warning msg="cleaning up after shim disconnected" id=254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5 namespace=k8s.io Jul 15 11:51:36.608622 env[1247]: time="2025-07-15T11:51:36.608611975Z" level=info msg="cleaning up dead shim" Jul 15 11:51:36.617509 env[1247]: time="2025-07-15T11:51:36.617480310Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3870 runtime=io.containerd.runc.v2\ntime=\"2025-07-15T11:51:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 15 11:51:36.617827 env[1247]: time="2025-07-15T11:51:36.617750607Z" level=error msg="copy shim log" error="read /proc/self/fd/33: file already closed" Jul 15 11:51:36.619229 env[1247]: time="2025-07-15T11:51:36.618001827Z" level=error msg="Failed to pipe stderr of container \"254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5\"" error="reading from a closed fifo" Jul 15 11:51:36.619229 env[1247]: time="2025-07-15T11:51:36.618355717Z" level=error msg="Failed to pipe stdout of container \"254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5\"" error="reading from a closed fifo" Jul 15 11:51:36.623600 env[1247]: time="2025-07-15T11:51:36.623553564Z" level=error msg="StartContainer for \"254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 15 11:51:36.623932 kubelet[2070]: E0715 11:51:36.623840 2070 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5" Jul 15 11:51:36.634217 kubelet[2070]: E0715 11:51:36.634173 2070 kuberuntime_manager.go:1341] "Unhandled Error" err=< Jul 15 11:51:36.634217 kubelet[2070]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 15 11:51:36.634217 kubelet[2070]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 15 11:51:36.634217 kubelet[2070]: rm /hostbin/cilium-mount Jul 15 11:51:36.635081 kubelet[2070]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kp9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zsw5j_kube-system(f7208d8b-26ca-49d9-94f8-56bd11331f6c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 15 11:51:36.635081 kubelet[2070]: > logger="UnhandledError" Jul 15 11:51:36.635353 kubelet[2070]: E0715 11:51:36.635315 2070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zsw5j" podUID="f7208d8b-26ca-49d9-94f8-56bd11331f6c" Jul 15 11:51:37.083211 env[1247]: time="2025-07-15T11:51:37.083184347Z" level=info msg="StopPodSandbox for \"a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436\"" Jul 15 11:51:37.083323 env[1247]: time="2025-07-15T11:51:37.083219053Z" level=info msg="Container to stop \"254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:51:37.084454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436-shm.mount: Deactivated successfully. Jul 15 11:51:37.089509 systemd[1]: cri-containerd-a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436.scope: Deactivated successfully. Jul 15 11:51:37.101489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436-rootfs.mount: Deactivated successfully. 
Jul 15 11:51:37.125646 env[1247]: time="2025-07-15T11:51:37.125611917Z" level=info msg="shim disconnected" id=a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436 Jul 15 11:51:37.125646 env[1247]: time="2025-07-15T11:51:37.125642303Z" level=warning msg="cleaning up after shim disconnected" id=a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436 namespace=k8s.io Jul 15 11:51:37.125646 env[1247]: time="2025-07-15T11:51:37.125648569Z" level=info msg="cleaning up dead shim" Jul 15 11:51:37.130435 env[1247]: time="2025-07-15T11:51:37.130411193Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3907 runtime=io.containerd.runc.v2\n" Jul 15 11:51:37.130708 env[1247]: time="2025-07-15T11:51:37.130692815Z" level=info msg="TearDown network for sandbox \"a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436\" successfully" Jul 15 11:51:37.130775 env[1247]: time="2025-07-15T11:51:37.130759145Z" level=info msg="StopPodSandbox for \"a180f16874276e56411fc703b46584bea1fd494edcb8ed469031812d36da3436\" returns successfully" Jul 15 11:51:37.177516 kubelet[2070]: I0715 11:51:37.177421 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-xtables-lock\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.177764 kubelet[2070]: I0715 11:51:37.177528 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-ipsec-secrets\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.177764 kubelet[2070]: I0715 11:51:37.177539 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-host-proc-sys-kernel\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.177764 kubelet[2070]: I0715 11:51:37.177551 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7208d8b-26ca-49d9-94f8-56bd11331f6c-clustermesh-secrets\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.177478 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.177769 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.177778 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178027 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-run\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178043 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-config-path\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178070 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-etc-cni-netd\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178082 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-lib-modules\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178090 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-cgroup\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178100 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-hostproc\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178109 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-bpf-maps\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178118 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-host-proc-sys-net\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178129 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7208d8b-26ca-49d9-94f8-56bd11331f6c-hubble-tls\") pod 
\"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178152 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cni-path\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178164 2070 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kp9f\" (UniqueName: \"kubernetes.io/projected/f7208d8b-26ca-49d9-94f8-56bd11331f6c-kube-api-access-4kp9f\") pod \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\" (UID: \"f7208d8b-26ca-49d9-94f8-56bd11331f6c\") " Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178188 2070 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.179252 kubelet[2070]: I0715 11:51:37.178194 2070 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.179774 kubelet[2070]: I0715 11:51:37.178199 2070 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.179774 kubelet[2070]: I0715 11:51:37.179142 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:51:37.179874 kubelet[2070]: I0715 11:51:37.179861 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-hostproc" (OuterVolumeSpecName: "hostproc") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.179953 kubelet[2070]: I0715 11:51:37.179940 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.180030 kubelet[2070]: I0715 11:51:37.180018 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.180117 kubelet[2070]: I0715 11:51:37.180105 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.180191 kubelet[2070]: I0715 11:51:37.180179 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.180271 kubelet[2070]: I0715 11:51:37.180258 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.181085 kubelet[2070]: I0715 11:51:37.180542 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cni-path" (OuterVolumeSpecName: "cni-path") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:51:37.181085 kubelet[2070]: I0715 11:51:37.180691 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7208d8b-26ca-49d9-94f8-56bd11331f6c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:51:37.183696 systemd[1]: var-lib-kubelet-pods-f7208d8b\x2d26ca\x2d49d9\x2d94f8\x2d56bd11331f6c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 11:51:37.185465 kubelet[2070]: I0715 11:51:37.185447 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:51:37.185732 systemd[1]: var-lib-kubelet-pods-f7208d8b\x2d26ca\x2d49d9\x2d94f8\x2d56bd11331f6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4kp9f.mount: Deactivated successfully. Jul 15 11:51:37.185807 systemd[1]: var-lib-kubelet-pods-f7208d8b\x2d26ca\x2d49d9\x2d94f8\x2d56bd11331f6c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 15 11:51:37.187689 kubelet[2070]: I0715 11:51:37.187674 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7208d8b-26ca-49d9-94f8-56bd11331f6c-kube-api-access-4kp9f" (OuterVolumeSpecName: "kube-api-access-4kp9f") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "kube-api-access-4kp9f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:51:37.187848 kubelet[2070]: I0715 11:51:37.187820 2070 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7208d8b-26ca-49d9-94f8-56bd11331f6c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f7208d8b-26ca-49d9-94f8-56bd11331f6c" (UID: "f7208d8b-26ca-49d9-94f8-56bd11331f6c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:51:37.279375 kubelet[2070]: I0715 11:51:37.279335 2070 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279375 kubelet[2070]: I0715 11:51:37.279370 2070 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7208d8b-26ca-49d9-94f8-56bd11331f6c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279375 kubelet[2070]: I0715 11:51:37.279376 2070 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279375 kubelet[2070]: I0715 11:51:37.279382 2070 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279550 kubelet[2070]: I0715 11:51:37.279387 2070 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279550 kubelet[2070]: I0715 11:51:37.279391 2070 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279550 kubelet[2070]: I0715 11:51:37.279397 2070 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279550 kubelet[2070]: I0715 11:51:37.279401 2070 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279550 kubelet[2070]: I0715 11:51:37.279405 2070 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279550 kubelet[2070]: I0715 11:51:37.279410 2070 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7208d8b-26ca-49d9-94f8-56bd11331f6c-hubble-tls\") on node 
\"localhost\" DevicePath \"\"" Jul 15 11:51:37.279550 kubelet[2070]: I0715 11:51:37.279415 2070 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7208d8b-26ca-49d9-94f8-56bd11331f6c-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.279550 kubelet[2070]: I0715 11:51:37.279419 2070 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4kp9f\" (UniqueName: \"kubernetes.io/projected/f7208d8b-26ca-49d9-94f8-56bd11331f6c-kube-api-access-4kp9f\") on node \"localhost\" DevicePath \"\"" Jul 15 11:51:37.693386 systemd[1]: Removed slice kubepods-burstable-podf7208d8b_26ca_49d9_94f8_56bd11331f6c.slice. Jul 15 11:51:37.765772 kubelet[2070]: E0715 11:51:37.765721 2070 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 11:51:37.972694 systemd[1]: var-lib-kubelet-pods-f7208d8b\x2d26ca\x2d49d9\x2d94f8\x2d56bd11331f6c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 11:51:38.085268 kubelet[2070]: I0715 11:51:38.085244 2070 scope.go:117] "RemoveContainer" containerID="254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5" Jul 15 11:51:38.086948 env[1247]: time="2025-07-15T11:51:38.086667451Z" level=info msg="RemoveContainer for \"254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5\"" Jul 15 11:51:38.114107 env[1247]: time="2025-07-15T11:51:38.114025538Z" level=info msg="RemoveContainer for \"254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5\" returns successfully" Jul 15 11:51:38.194984 kubelet[2070]: I0715 11:51:38.194956 2070 memory_manager.go:355] "RemoveStaleState removing state" podUID="f7208d8b-26ca-49d9-94f8-56bd11331f6c" containerName="mount-cgroup" Jul 15 11:51:38.198587 systemd[1]: Created slice kubepods-burstable-pod09f0be0e_2632_49d7_a9b1_f1017a44dee2.slice. 
Jul 15 11:51:38.285870 kubelet[2070]: I0715 11:51:38.285789 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09f0be0e-2632-49d7-a9b1-f1017a44dee2-cilium-ipsec-secrets\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286019 kubelet[2070]: I0715 11:51:38.286007 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-host-proc-sys-kernel\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286094 kubelet[2070]: I0715 11:51:38.286084 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09f0be0e-2632-49d7-a9b1-f1017a44dee2-hubble-tls\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286168 kubelet[2070]: I0715 11:51:38.286160 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-cni-path\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286265 kubelet[2070]: I0715 11:51:38.286256 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-etc-cni-netd\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286329 kubelet[2070]: I0715 11:51:38.286319 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09f0be0e-2632-49d7-a9b1-f1017a44dee2-clustermesh-secrets\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286402 kubelet[2070]: I0715 11:51:38.286384 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzb7\" (UniqueName: \"kubernetes.io/projected/09f0be0e-2632-49d7-a9b1-f1017a44dee2-kube-api-access-8gzb7\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286454 kubelet[2070]: I0715 11:51:38.286445 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-hostproc\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286518 kubelet[2070]: I0715 11:51:38.286509 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09f0be0e-2632-49d7-a9b1-f1017a44dee2-cilium-config-path\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286582 kubelet[2070]: I0715 11:51:38.286574 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-bpf-maps\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286667 kubelet[2070]: I0715 11:51:38.286626 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-lib-modules\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286739 kubelet[2070]: I0715 11:51:38.286731 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-cilium-run\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286800 kubelet[2070]: I0715 11:51:38.286784 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-xtables-lock\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286862 kubelet[2070]: I0715 11:51:38.286850 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-cilium-cgroup\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.286932 kubelet[2070]: I0715 11:51:38.286924 2070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09f0be0e-2632-49d7-a9b1-f1017a44dee2-host-proc-sys-net\") pod \"cilium-76ppg\" (UID: \"09f0be0e-2632-49d7-a9b1-f1017a44dee2\") " pod="kube-system/cilium-76ppg" Jul 15 11:51:38.501497 env[1247]: time="2025-07-15T11:51:38.501468467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-76ppg,Uid:09f0be0e-2632-49d7-a9b1-f1017a44dee2,Namespace:kube-system,Attempt:0,}" Jul 15 11:51:38.574133 env[1247]: time="2025-07-15T11:51:38.574016797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:51:38.574133 env[1247]: time="2025-07-15T11:51:38.574050848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:51:38.574826 env[1247]: time="2025-07-15T11:51:38.574073819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:51:38.574826 env[1247]: time="2025-07-15T11:51:38.574462732Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d pid=3936 runtime=io.containerd.runc.v2 Jul 15 11:51:38.587200 systemd[1]: Started cri-containerd-2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d.scope. 
Jul 15 11:51:38.605659 env[1247]: time="2025-07-15T11:51:38.605632896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-76ppg,Uid:09f0be0e-2632-49d7-a9b1-f1017a44dee2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\"" Jul 15 11:51:38.608101 env[1247]: time="2025-07-15T11:51:38.608083253Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:51:38.736669 env[1247]: time="2025-07-15T11:51:38.736626932Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d\"" Jul 15 11:51:38.737122 env[1247]: time="2025-07-15T11:51:38.737043855Z" level=info msg="StartContainer for \"f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d\"" Jul 15 11:51:38.749555 systemd[1]: Started cri-containerd-f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d.scope. Jul 15 11:51:38.794131 env[1247]: time="2025-07-15T11:51:38.794096638Z" level=info msg="StartContainer for \"f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d\" returns successfully" Jul 15 11:51:38.978795 systemd[1]: cri-containerd-f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d.scope: Deactivated successfully. Jul 15 11:51:38.990384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d-rootfs.mount: Deactivated successfully. Jul 15 11:51:39.188409 env[1247]: time="2025-07-15T11:51:39.188380026Z" level=info msg="shim disconnected" id=f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d Jul 15 11:51:39.188774 env[1247]: time="2025-07-15T11:51:39.188761944Z" level=warning msg="cleaning up after shim disconnected" id=f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d namespace=k8s.io Jul 15 11:51:39.188830 env[1247]: time="2025-07-15T11:51:39.188814847Z" level=info msg="cleaning up dead shim" Jul 15 11:51:39.195731 env[1247]: time="2025-07-15T11:51:39.195704773Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4018 runtime=io.containerd.runc.v2\n" Jul 15 11:51:39.691720 kubelet[2070]: I0715 11:51:39.691688 2070 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7208d8b-26ca-49d9-94f8-56bd11331f6c" path="/var/lib/kubelet/pods/f7208d8b-26ca-49d9-94f8-56bd11331f6c/volumes" Jul 15 11:51:39.714093 kubelet[2070]: W0715 11:51:39.714068 2070 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7208d8b_26ca_49d9_94f8_56bd11331f6c.slice/cri-containerd-254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5.scope WatchSource:0}: container "254d47d8252dbb3a716a19773f3ccdffbaf9e244ee29b48203065d29730404b5" in namespace "k8s.io": not found Jul 15 11:51:40.092939 env[1247]: time="2025-07-15T11:51:40.092876537Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:51:40.105559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910796313.mount: Deactivated 
successfully. Jul 15 11:51:40.119398 env[1247]: time="2025-07-15T11:51:40.119366576Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f\"" Jul 15 11:51:40.119997 env[1247]: time="2025-07-15T11:51:40.119980203Z" level=info msg="StartContainer for \"2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f\"" Jul 15 11:51:40.142189 systemd[1]: Started cri-containerd-2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f.scope. Jul 15 11:51:40.192178 env[1247]: time="2025-07-15T11:51:40.192141403Z" level=info msg="StartContainer for \"2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f\" returns successfully" Jul 15 11:51:40.225338 kubelet[2070]: I0715 11:51:40.224566 2070 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T11:51:40Z","lastTransitionTime":"2025-07-15T11:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 11:51:40.233300 systemd[1]: cri-containerd-2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f.scope: Deactivated successfully. Jul 15 11:51:40.489343 env[1247]: time="2025-07-15T11:51:40.489302730Z" level=info msg="shim disconnected" id=2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f Jul 15 11:51:40.489343 env[1247]: time="2025-07-15T11:51:40.489334519Z" level=warning msg="cleaning up after shim disconnected" id=2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f namespace=k8s.io Jul 15 11:51:40.489343 env[1247]: time="2025-07-15T11:51:40.489342086Z" level=info msg="cleaning up dead shim" Jul 15 11:51:40.494343 env[1247]: time="2025-07-15T11:51:40.494314387Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4081 runtime=io.containerd.runc.v2\n" Jul 15 11:51:41.096170 env[1247]: time="2025-07-15T11:51:41.096134030Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 11:51:41.103544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f-rootfs.mount: Deactivated successfully. Jul 15 11:51:41.131814 env[1247]: time="2025-07-15T11:51:41.131775546Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7\"" Jul 15 11:51:41.132608 env[1247]: time="2025-07-15T11:51:41.132588061Z" level=info msg="StartContainer for \"15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7\"" Jul 15 11:51:41.145925 systemd[1]: Started cri-containerd-15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7.scope. 
Jul 15 11:51:41.171721 env[1247]: time="2025-07-15T11:51:41.171691073Z" level=info msg="StartContainer for \"15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7\" returns successfully" Jul 15 11:51:41.219090 systemd[1]: cri-containerd-15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7.scope: Deactivated successfully. Jul 15 11:51:41.240337 env[1247]: time="2025-07-15T11:51:41.240303549Z" level=info msg="shim disconnected" id=15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7 Jul 15 11:51:41.240676 env[1247]: time="2025-07-15T11:51:41.240664461Z" level=warning msg="cleaning up after shim disconnected" id=15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7 namespace=k8s.io Jul 15 11:51:41.240735 env[1247]: time="2025-07-15T11:51:41.240724461Z" level=info msg="cleaning up dead shim" Jul 15 11:51:41.246779 env[1247]: time="2025-07-15T11:51:41.246747877Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4139 runtime=io.containerd.runc.v2\n" Jul 15 11:51:42.098219 env[1247]: time="2025-07-15T11:51:42.098191578Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 11:51:42.103145 systemd[1]: run-containerd-runc-k8s.io-15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7-runc.ANUT5c.mount: Deactivated successfully. Jul 15 11:51:42.103206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7-rootfs.mount: Deactivated successfully. Jul 15 11:51:42.122031 env[1247]: time="2025-07-15T11:51:42.121999744Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1\"" Jul 15 11:51:42.122482 env[1247]: time="2025-07-15T11:51:42.122465199Z" level=info msg="StartContainer for \"b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1\"" Jul 15 11:51:42.135089 systemd[1]: Started cri-containerd-b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1.scope. Jul 15 11:51:42.152772 systemd[1]: cri-containerd-b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1.scope: Deactivated successfully. 
Jul 15 11:51:42.154238 env[1247]: time="2025-07-15T11:51:42.154194003Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f0be0e_2632_49d7_a9b1_f1017a44dee2.slice/cri-containerd-b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1.scope/memory.events\": no such file or directory" Jul 15 11:51:42.157691 env[1247]: time="2025-07-15T11:51:42.157668332Z" level=info msg="StartContainer for \"b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1\" returns successfully" Jul 15 11:51:42.177834 env[1247]: time="2025-07-15T11:51:42.177806013Z" level=info msg="shim disconnected" id=b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1 Jul 15 11:51:42.178031 env[1247]: time="2025-07-15T11:51:42.178017582Z" level=warning msg="cleaning up after shim disconnected" id=b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1 namespace=k8s.io Jul 15 11:51:42.178110 env[1247]: time="2025-07-15T11:51:42.178099652Z" level=info msg="cleaning up dead shim" Jul 15 11:51:42.183084 env[1247]: time="2025-07-15T11:51:42.183046087Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:51:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4195 runtime=io.containerd.runc.v2\n" Jul 15 11:51:42.766774 kubelet[2070]: E0715 11:51:42.766751 2070 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 11:51:42.820909 kubelet[2070]: W0715 11:51:42.820884 2070 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f0be0e_2632_49d7_a9b1_f1017a44dee2.slice/cri-containerd-f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d.scope WatchSource:0}: task f672fdfb15cf9fa5f2b1403ffdcb47a53d8b96d9be56f40779d1427636c6504d not found: not found Jul 15 11:51:43.100507 env[1247]: time="2025-07-15T11:51:43.100447518Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 11:51:43.103574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1-rootfs.mount: Deactivated successfully. Jul 15 11:51:43.131229 env[1247]: time="2025-07-15T11:51:43.131201047Z" level=info msg="CreateContainer within sandbox \"2b8e12c97073a305917c847c53d2349bd61769cc7fee326d50d4789dbc759e8d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d29241979b70a7f509a090e46eb2123cbfaf8aa2cc4e5f905cc098043ac0cd79\"" Jul 15 11:51:43.131682 env[1247]: time="2025-07-15T11:51:43.131664982Z" level=info msg="StartContainer for \"d29241979b70a7f509a090e46eb2123cbfaf8aa2cc4e5f905cc098043ac0cd79\"" Jul 15 11:51:43.144181 systemd[1]: Started cri-containerd-d29241979b70a7f509a090e46eb2123cbfaf8aa2cc4e5f905cc098043ac0cd79.scope. 
Jul 15 11:51:43.165946 env[1247]: time="2025-07-15T11:51:43.165921686Z" level=info msg="StartContainer for \"d29241979b70a7f509a090e46eb2123cbfaf8aa2cc4e5f905cc098043ac0cd79\" returns successfully" Jul 15 11:51:44.135973 kubelet[2070]: I0715 11:51:44.135909 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-76ppg" podStartSLOduration=6.135894358 podStartE2EDuration="6.135894358s" podCreationTimestamp="2025-07-15 11:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:51:44.135871307 +0000 UTC m=+136.565074202" watchObservedRunningTime="2025-07-15 11:51:44.135894358 +0000 UTC m=+136.565097246" Jul 15 11:51:44.894037 systemd[1]: run-containerd-runc-k8s.io-d29241979b70a7f509a090e46eb2123cbfaf8aa2cc4e5f905cc098043ac0cd79-runc.0vx56Q.mount: Deactivated successfully. Jul 15 11:51:45.821079 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 15 11:51:45.926117 kubelet[2070]: W0715 11:51:45.926084 2070 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f0be0e_2632_49d7_a9b1_f1017a44dee2.slice/cri-containerd-2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f.scope WatchSource:0}: task 2a47073393c7fb49a200134ccf42a6e3c30552731e16858f73176d0913210f2f not found: not found Jul 15 11:51:47.054041 systemd[1]: run-containerd-runc-k8s.io-d29241979b70a7f509a090e46eb2123cbfaf8aa2cc4e5f905cc098043ac0cd79-runc.VThxpj.mount: Deactivated successfully. Jul 15 11:51:48.692308 systemd-networkd[1065]: lxc_health: Link UP Jul 15 11:51:48.708311 systemd-networkd[1065]: lxc_health: Gained carrier Jul 15 11:51:48.713384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 15 11:51:49.033648 kubelet[2070]: W0715 11:51:49.033204 2070 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f0be0e_2632_49d7_a9b1_f1017a44dee2.slice/cri-containerd-15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7.scope WatchSource:0}: task 15077d356307072811b4568b56480055ea4b644a2f66340aa191a755348dccb7 not found: not found Jul 15 11:51:49.972185 systemd-networkd[1065]: lxc_health: Gained IPv6LL Jul 15 11:51:51.272020 systemd[1]: run-containerd-runc-k8s.io-d29241979b70a7f509a090e46eb2123cbfaf8aa2cc4e5f905cc098043ac0cd79-runc.z1Z4gE.mount: Deactivated successfully. Jul 15 11:51:52.139549 kubelet[2070]: W0715 11:51:52.139523 2070 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f0be0e_2632_49d7_a9b1_f1017a44dee2.slice/cri-containerd-b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1.scope WatchSource:0}: task b679c4bb88b8037623215b54ed4162107f2534cb08747ebd98874f8ee6c5a0a1 not found: not found Jul 15 11:51:53.366856 systemd[1]: run-containerd-runc-k8s.io-d29241979b70a7f509a090e46eb2123cbfaf8aa2cc4e5f905cc098043ac0cd79-runc.lQ4YfC.mount: Deactivated successfully. Jul 15 11:51:55.481502 sshd[3865]: pam_unix(sshd:session): session closed for user core Jul 15 11:51:55.483817 systemd-logind[1241]: Session 27 logged out. Waiting for processes to exit. Jul 15 11:51:55.484021 systemd[1]: sshd@24-139.178.70.105:22-147.75.109.163:33552.service: Deactivated successfully. Jul 15 11:51:55.484465 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 15 11:51:55.485363 systemd-logind[1241]: Removed session 27.