Mar 20 21:32:09.899137 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 20 19:36:47 -00 2025
Mar 20 21:32:09.899163 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019
Mar 20 21:32:09.899175 kernel: BIOS-provided physical RAM map:
Mar 20 21:32:09.899182 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 20 21:32:09.899188 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 20 21:32:09.899195 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 20 21:32:09.899202 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 20 21:32:09.899209 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 20 21:32:09.899216 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 20 21:32:09.899222 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 20 21:32:09.899229 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 20 21:32:09.899238 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 20 21:32:09.899245 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 20 21:32:09.899252 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 20 21:32:09.899260 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 20 21:32:09.899267 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 20 21:32:09.899277 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 20 21:32:09.899284 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 20 21:32:09.899291 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 20 21:32:09.899298 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 20 21:32:09.899305 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 20 21:32:09.899312 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 20 21:32:09.899327 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 20 21:32:09.899334 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 20 21:32:09.899342 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 20 21:32:09.899349 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 20 21:32:09.899356 kernel: NX (Execute Disable) protection: active
Mar 20 21:32:09.899365 kernel: APIC: Static calls initialized
Mar 20 21:32:09.899372 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 20 21:32:09.899380 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 20 21:32:09.899387 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 20 21:32:09.899394 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 20 21:32:09.899401 kernel: extended physical RAM map:
Mar 20 21:32:09.899408 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 20 21:32:09.899415 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 20 21:32:09.899422 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 20 21:32:09.899430 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 20 21:32:09.900659 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 20 21:32:09.900675 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 20 21:32:09.900699 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 20 21:32:09.900720 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Mar 20 21:32:09.900735 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Mar 20 21:32:09.900742 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Mar 20 21:32:09.900750 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Mar 20 21:32:09.900757 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Mar 20 21:32:09.900767 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 20 21:32:09.900775 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 20 21:32:09.900782 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 20 21:32:09.900790 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 20 21:32:09.900797 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 20 21:32:09.900805 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 20 21:32:09.900812 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 20 21:32:09.900819 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 20 21:32:09.900827 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 20 21:32:09.900834 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 20 21:32:09.900844 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 20 21:32:09.900851 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 20 21:32:09.900859 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 20 21:32:09.900866 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 20 21:32:09.900874 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 20 21:32:09.900881 kernel: efi: EFI v2.7 by EDK II
Mar 20 21:32:09.900889 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Mar 20 21:32:09.900896 kernel: random: crng init done
Mar 20 21:32:09.900904 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 20 21:32:09.900911 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 20 21:32:09.900919 kernel: secureboot: Secure boot disabled
Mar 20 21:32:09.900929 kernel: SMBIOS 2.8 present.
Mar 20 21:32:09.900936 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 20 21:32:09.900943 kernel: Hypervisor detected: KVM
Mar 20 21:32:09.900951 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 20 21:32:09.900958 kernel: kvm-clock: using sched offset of 2974149849 cycles
Mar 20 21:32:09.900966 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 20 21:32:09.900982 kernel: tsc: Detected 2794.746 MHz processor
Mar 20 21:32:09.900992 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 20 21:32:09.901000 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 20 21:32:09.901014 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 20 21:32:09.901026 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 20 21:32:09.901033 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 20 21:32:09.901041 kernel: Using GB pages for direct mapping
Mar 20 21:32:09.901049 kernel: ACPI: Early table checksum verification disabled
Mar 20 21:32:09.901056 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 20 21:32:09.901064 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 20 21:32:09.901072 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:32:09.901080 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:32:09.901087 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 20 21:32:09.901097 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:32:09.901105 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:32:09.901112 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:32:09.901120 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:32:09.901128 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 20 21:32:09.901135 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 20 21:32:09.901143 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Mar 20 21:32:09.901151 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 20 21:32:09.901158 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 20 21:32:09.901169 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 20 21:32:09.901176 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 20 21:32:09.901184 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 20 21:32:09.901191 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 20 21:32:09.901199 kernel: No NUMA configuration found
Mar 20 21:32:09.901206 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 20 21:32:09.901214 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Mar 20 21:32:09.901221 kernel: Zone ranges:
Mar 20 21:32:09.901229 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 20 21:32:09.901239 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 20 21:32:09.901246 kernel: Normal empty
Mar 20 21:32:09.901254 kernel: Movable zone start for each node
Mar 20 21:32:09.901262 kernel: Early memory node ranges
Mar 20 21:32:09.901269 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 20 21:32:09.901277 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 20 21:32:09.901284 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 20 21:32:09.901292 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 20 21:32:09.901299 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 20 21:32:09.901306 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 20 21:32:09.901316 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Mar 20 21:32:09.901331 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Mar 20 21:32:09.901339 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 20 21:32:09.901346 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 20 21:32:09.901354 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 20 21:32:09.901370 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 20 21:32:09.901380 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 20 21:32:09.901387 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 20 21:32:09.901395 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 20 21:32:09.901403 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 20 21:32:09.901411 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 20 21:32:09.901419 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 20 21:32:09.901429 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 20 21:32:09.901436 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 20 21:32:09.901444 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 20 21:32:09.901452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 20 21:32:09.901460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 20 21:32:09.901470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 20 21:32:09.901478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 20 21:32:09.901486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 20 21:32:09.901494 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 20 21:32:09.901502 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 20 21:32:09.901509 kernel: TSC deadline timer available
Mar 20 21:32:09.901517 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 20 21:32:09.901525 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 20 21:32:09.901533 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 20 21:32:09.901543 kernel: kvm-guest: setup PV sched yield
Mar 20 21:32:09.901551 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 20 21:32:09.901559 kernel: Booting paravirtualized kernel on KVM
Mar 20 21:32:09.901567 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 20 21:32:09.901575 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 20 21:32:09.901594 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 20 21:32:09.901602 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 20 21:32:09.901610 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 20 21:32:09.901617 kernel: kvm-guest: PV spinlocks enabled
Mar 20 21:32:09.901628 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 20 21:32:09.901637 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019
Mar 20 21:32:09.901645 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 20 21:32:09.901653 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 20 21:32:09.901661 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 20 21:32:09.901669 kernel: Fallback order for Node 0: 0
Mar 20 21:32:09.901677 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Mar 20 21:32:09.901685 kernel: Policy zone: DMA32
Mar 20 21:32:09.901695 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 20 21:32:09.901704 kernel: Memory: 2385672K/2565800K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 179872K reserved, 0K cma-reserved)
Mar 20 21:32:09.901712 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 20 21:32:09.901720 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 20 21:32:09.901727 kernel: ftrace: allocated 149 pages with 4 groups
Mar 20 21:32:09.901735 kernel: Dynamic Preempt: voluntary
Mar 20 21:32:09.901743 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 20 21:32:09.901751 kernel: rcu: RCU event tracing is enabled.
Mar 20 21:32:09.901760 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 20 21:32:09.901770 kernel: Trampoline variant of Tasks RCU enabled.
Mar 20 21:32:09.901778 kernel: Rude variant of Tasks RCU enabled.
Mar 20 21:32:09.901786 kernel: Tracing variant of Tasks RCU enabled.
Mar 20 21:32:09.901794 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 20 21:32:09.901802 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 20 21:32:09.901810 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 20 21:32:09.901818 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 20 21:32:09.901826 kernel: Console: colour dummy device 80x25
Mar 20 21:32:09.901833 kernel: printk: console [ttyS0] enabled
Mar 20 21:32:09.901844 kernel: ACPI: Core revision 20230628
Mar 20 21:32:09.901852 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 20 21:32:09.901860 kernel: APIC: Switch to symmetric I/O mode setup
Mar 20 21:32:09.901867 kernel: x2apic enabled
Mar 20 21:32:09.901875 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 20 21:32:09.901883 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 20 21:32:09.901891 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 20 21:32:09.901899 kernel: kvm-guest: setup PV IPIs
Mar 20 21:32:09.901907 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 20 21:32:09.901917 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 20 21:32:09.901925 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Mar 20 21:32:09.901933 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 20 21:32:09.901940 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 20 21:32:09.901948 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 20 21:32:09.901956 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 20 21:32:09.901964 kernel: Spectre V2 : Mitigation: Retpolines
Mar 20 21:32:09.901972 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 20 21:32:09.901980 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 20 21:32:09.901990 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 20 21:32:09.901998 kernel: RETBleed: Mitigation: untrained return thunk
Mar 20 21:32:09.902006 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 20 21:32:09.902013 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 20 21:32:09.902021 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 20 21:32:09.902030 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 20 21:32:09.902038 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 20 21:32:09.902045 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 20 21:32:09.902053 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 20 21:32:09.902063 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 20 21:32:09.902071 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 20 21:32:09.902079 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 20 21:32:09.902087 kernel: Freeing SMP alternatives memory: 32K
Mar 20 21:32:09.902095 kernel: pid_max: default: 32768 minimum: 301
Mar 20 21:32:09.902102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 20 21:32:09.902110 kernel: landlock: Up and running.
Mar 20 21:32:09.902118 kernel: SELinux: Initializing.
Mar 20 21:32:09.902126 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 21:32:09.902136 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 21:32:09.902144 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 20 21:32:09.902152 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:32:09.902160 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:32:09.902168 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:32:09.902176 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 20 21:32:09.902184 kernel: ... version: 0
Mar 20 21:32:09.902191 kernel: ... bit width: 48
Mar 20 21:32:09.902217 kernel: ... generic registers: 6
Mar 20 21:32:09.902239 kernel: ... value mask: 0000ffffffffffff
Mar 20 21:32:09.902255 kernel: ... max period: 00007fffffffffff
Mar 20 21:32:09.902263 kernel: ... fixed-purpose events: 0
Mar 20 21:32:09.902271 kernel: ... event mask: 000000000000003f
Mar 20 21:32:09.902279 kernel: signal: max sigframe size: 1776
Mar 20 21:32:09.902287 kernel: rcu: Hierarchical SRCU implementation.
Mar 20 21:32:09.902295 kernel: rcu: Max phase no-delay instances is 400.
Mar 20 21:32:09.902303 kernel: smp: Bringing up secondary CPUs ...
Mar 20 21:32:09.902311 kernel: smpboot: x86: Booting SMP configuration:
Mar 20 21:32:09.902328 kernel: .... node #0, CPUs: #1 #2 #3
Mar 20 21:32:09.902339 kernel: smp: Brought up 1 node, 4 CPUs
Mar 20 21:32:09.902347 kernel: smpboot: Max logical packages: 1
Mar 20 21:32:09.902355 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Mar 20 21:32:09.902363 kernel: devtmpfs: initialized
Mar 20 21:32:09.902371 kernel: x86/mm: Memory block size: 128MB
Mar 20 21:32:09.902379 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 20 21:32:09.902387 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 20 21:32:09.902395 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 20 21:32:09.902405 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 20 21:32:09.902413 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Mar 20 21:32:09.902421 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 20 21:32:09.902429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 20 21:32:09.902437 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 20 21:32:09.902445 kernel: pinctrl core: initialized pinctrl subsystem
Mar 20 21:32:09.902453 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 20 21:32:09.902461 kernel: audit: initializing netlink subsys (disabled)
Mar 20 21:32:09.902471 kernel: audit: type=2000 audit(1742506329.347:1): state=initialized audit_enabled=0 res=1
Mar 20 21:32:09.902479 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 20 21:32:09.902487 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 20 21:32:09.902495 kernel: cpuidle: using governor menu
Mar 20 21:32:09.902502 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 20 21:32:09.902510 kernel: dca service started, version 1.12.1
Mar 20 21:32:09.902518 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 20 21:32:09.902526 kernel: PCI: Using configuration type 1 for base access
Mar 20 21:32:09.902534 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 20 21:32:09.902544 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 20 21:32:09.902552 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 20 21:32:09.902560 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 20 21:32:09.902568 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 20 21:32:09.902576 kernel: ACPI: Added _OSI(Module Device)
Mar 20 21:32:09.902593 kernel: ACPI: Added _OSI(Processor Device)
Mar 20 21:32:09.902601 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 20 21:32:09.902609 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 20 21:32:09.902619 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 20 21:32:09.902630 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 20 21:32:09.902638 kernel: ACPI: Interpreter enabled
Mar 20 21:32:09.902648 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 20 21:32:09.902656 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 20 21:32:09.902666 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 20 21:32:09.902674 kernel: PCI: Using E820 reservations for host bridge windows
Mar 20 21:32:09.902682 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 20 21:32:09.902690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 20 21:32:09.902871 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 20 21:32:09.903013 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 20 21:32:09.903165 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 20 21:32:09.903178 kernel: PCI host bridge to bus 0000:00
Mar 20 21:32:09.903336 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 20 21:32:09.903454 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 20 21:32:09.903571 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 20 21:32:09.903704 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 20 21:32:09.903817 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 20 21:32:09.903931 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 20 21:32:09.904045 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 20 21:32:09.904204 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 20 21:32:09.904347 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 20 21:32:09.904473 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 20 21:32:09.904624 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 20 21:32:09.904750 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 20 21:32:09.904875 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 20 21:32:09.905000 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 20 21:32:09.905133 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 20 21:32:09.905258 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 20 21:32:09.905393 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 20 21:32:09.905525 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 20 21:32:09.905687 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 20 21:32:09.905816 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 20 21:32:09.905943 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 20 21:32:09.906068 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 20 21:32:09.906213 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 20 21:32:09.906354 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 20 21:32:09.906481 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 20 21:32:09.906642 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 20 21:32:09.906773 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 20 21:32:09.906906 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 20 21:32:09.907032 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 20 21:32:09.907164 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 20 21:32:09.907294 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 20 21:32:09.907429 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 20 21:32:09.907562 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 20 21:32:09.907721 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 20 21:32:09.907732 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 20 21:32:09.907741 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 20 21:32:09.907749 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 20 21:32:09.907757 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 20 21:32:09.907769 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 20 21:32:09.907777 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 20 21:32:09.907786 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 20 21:32:09.907794 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 20 21:32:09.907802 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 20 21:32:09.907810 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 20 21:32:09.907818 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 20 21:32:09.907826 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 20 21:32:09.907833 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 20 21:32:09.907844 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 20 21:32:09.907852 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 20 21:32:09.907860 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 20 21:32:09.907868 kernel: iommu: Default domain type: Translated
Mar 20 21:32:09.907876 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 20 21:32:09.907884 kernel: efivars: Registered efivars operations
Mar 20 21:32:09.907892 kernel: PCI: Using ACPI for IRQ routing
Mar 20 21:32:09.907900 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 20 21:32:09.907908 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 20 21:32:09.907919 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 20 21:32:09.907927 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Mar 20 21:32:09.907934 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Mar 20 21:32:09.907942 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 20 21:32:09.907951 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 20 21:32:09.907958 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Mar 20 21:32:09.907966 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 20 21:32:09.908092 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 20 21:32:09.908221 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 20 21:32:09.908356 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 20 21:32:09.908368 kernel: vgaarb: loaded
Mar 20 21:32:09.908376 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 20 21:32:09.908384 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 20 21:32:09.908392 kernel: clocksource: Switched to clocksource kvm-clock
Mar 20 21:32:09.908400 kernel: VFS: Disk quotas dquot_6.6.0
Mar 20 21:32:09.908408 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 20 21:32:09.908416 kernel: pnp: PnP ACPI init
Mar 20 21:32:09.908562 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 20 21:32:09.908574 kernel: pnp: PnP ACPI: found 6 devices
Mar 20 21:32:09.908599 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 20 21:32:09.908607 kernel: NET: Registered PF_INET protocol family
Mar 20 21:32:09.908633 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 20 21:32:09.908644 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 20 21:32:09.908652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 20 21:32:09.908660 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 20 21:32:09.908671 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 20 21:32:09.908682 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 20 21:32:09.908690 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 21:32:09.908698 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 21:32:09.908707 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 20 21:32:09.908715 kernel: NET: Registered PF_XDP protocol family
Mar 20 21:32:09.908846 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 20 21:32:09.908973 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 20 21:32:09.909093 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 20 21:32:09.909208 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 20 21:32:09.909331 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 20 21:32:09.909446 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 20 21:32:09.909560 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 20 21:32:09.909688 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 20 21:32:09.909700 kernel: PCI: CLS 0 bytes, default 64
Mar 20 21:32:09.909709 kernel: Initialise system trusted keyrings
Mar 20 21:32:09.909721 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 20 21:32:09.909729 kernel: Key type asymmetric registered
Mar 20 21:32:09.909737 kernel: Asymmetric key parser 'x509' registered
Mar 20 21:32:09.909746 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 20 21:32:09.909754 kernel: io scheduler mq-deadline registered
Mar 20 21:32:09.909762 kernel: io scheduler kyber registered
Mar 20 21:32:09.909770 kernel: io scheduler bfq registered
Mar 20 21:32:09.909779 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 20 21:32:09.909787 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 20 21:32:09.909798 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 20 21:32:09.909809 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 20 21:32:09.909817 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 20 21:32:09.909825 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 20 21:32:09.909834 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 20 21:32:09.909843 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 20 21:32:09.909853 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 20 21:32:09.909997 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 20 21:32:09.910010 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 20 21:32:09.910127 kernel: rtc_cmos 00:04: registered as rtc0
Mar 20 21:32:09.910245 kernel: rtc_cmos 00:04: setting system clock to 2025-03-20T21:32:09 UTC (1742506329)
Mar 20 21:32:09.910372 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 20 21:32:09.910384 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 20 21:32:09.910392 kernel: efifb: probing for efifb
Mar 20 21:32:09.910405 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 20 21:32:09.910413 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 20 21:32:09.910421 kernel: efifb: scrolling: redraw
Mar 20 21:32:09.910429 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 20 21:32:09.910438 kernel: Console: switching to colour frame buffer device 160x50
Mar 20 21:32:09.910446 kernel: fb0: EFI VGA frame buffer device
Mar 20 21:32:09.910454 kernel: pstore: Using crash dump compression: deflate
Mar 20 21:32:09.910463 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 20 21:32:09.910471 kernel: NET: Registered PF_INET6 protocol family
Mar 20 21:32:09.910482 kernel: Segment Routing with IPv6
Mar 20 21:32:09.910490 kernel: In-situ OAM (IOAM) with IPv6
Mar 20 21:32:09.910498 kernel: NET: Registered PF_PACKET protocol family
Mar 20 21:32:09.910507 kernel: Key type dns_resolver registered
Mar 20 21:32:09.910515 kernel: IPI shorthand broadcast: enabled
Mar 20 21:32:09.910523 kernel: sched_clock: Marking stable (909002896, 164266713)->(1127903718, -54634109)
Mar 20 21:32:09.910531 kernel: registered taskstats version 1
Mar 20 21:32:09.910540 kernel: Loading compiled-in X.509 certificates
Mar 20 21:32:09.910548 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 9e7923b67df1c6f0613bc4380f7ea8de9ce851ac'
Mar 20 21:32:09.910559 kernel: Key type .fscrypt registered
Mar 20 21:32:09.910567 kernel: Key type fscrypt-provisioning registered
Mar 20 21:32:09.910576 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 20 21:32:09.910649 kernel: ima: Allocated hash algorithm: sha1 Mar 20 21:32:09.910657 kernel: ima: No architecture policies found Mar 20 21:32:09.910666 kernel: clk: Disabling unused clocks Mar 20 21:32:09.910674 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 20 21:32:09.910683 kernel: Write protecting the kernel read-only data: 40960k Mar 20 21:32:09.910691 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 20 21:32:09.910702 kernel: Run /init as init process Mar 20 21:32:09.910711 kernel: with arguments: Mar 20 21:32:09.910719 kernel: /init Mar 20 21:32:09.910727 kernel: with environment: Mar 20 21:32:09.910735 kernel: HOME=/ Mar 20 21:32:09.910743 kernel: TERM=linux Mar 20 21:32:09.910752 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 20 21:32:09.910761 systemd[1]: Successfully made /usr/ read-only. Mar 20 21:32:09.910773 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:32:09.910785 systemd[1]: Detected virtualization kvm. Mar 20 21:32:09.910794 systemd[1]: Detected architecture x86-64. Mar 20 21:32:09.910802 systemd[1]: Running in initrd. Mar 20 21:32:09.910811 systemd[1]: No hostname configured, using default hostname. Mar 20 21:32:09.910820 systemd[1]: Hostname set to . Mar 20 21:32:09.910828 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:32:09.910837 systemd[1]: Queued start job for default target initrd.target. Mar 20 21:32:09.910848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:32:09.910857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 20 21:32:09.910867 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 20 21:32:09.910876 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 21:32:09.910885 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 20 21:32:09.910894 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 20 21:32:09.910905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 20 21:32:09.910916 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 20 21:32:09.910925 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:32:09.910934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:32:09.910943 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:32:09.910952 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:32:09.910961 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:32:09.910969 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:32:09.910978 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:32:09.910989 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:32:09.910998 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 20 21:32:09.911007 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 20 21:32:09.911016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:32:09.911025 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:32:09.911033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 20 21:32:09.911042 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:32:09.911051 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 20 21:32:09.911060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 21:32:09.911071 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 20 21:32:09.911080 systemd[1]: Starting systemd-fsck-usr.service... Mar 20 21:32:09.912253 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:32:09.912267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:32:09.912276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:32:09.912285 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 20 21:32:09.912293 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:32:09.912306 systemd[1]: Finished systemd-fsck-usr.service. Mar 20 21:32:09.912315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 21:32:09.912358 systemd-journald[191]: Collecting audit messages is disabled. Mar 20 21:32:09.912381 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:32:09.912391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:32:09.912399 systemd-journald[191]: Journal started Mar 20 21:32:09.912418 systemd-journald[191]: Runtime Journal (/run/log/journal/b70e6e4813174c35bd8f27f2a45e8cec) is 6M, max 48.2M, 42.2M free. Mar 20 21:32:09.901043 systemd-modules-load[194]: Inserted module 'overlay' Mar 20 21:32:09.916395 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:32:09.918602 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 20 21:32:09.930601 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 20 21:32:09.933070 systemd-modules-load[194]: Inserted module 'br_netfilter' Mar 20 21:32:09.934005 kernel: Bridge firewalling registered Mar 20 21:32:09.935880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:32:09.938899 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:32:09.941537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:32:09.946709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:32:09.949387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:32:09.953125 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:32:09.955965 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:32:09.960413 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 20 21:32:09.962529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:32:09.980293 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:32:09.990129 dracut-cmdline[229]: dracut-dracut-053 Mar 20 21:32:09.995860 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=619bfa043b53ac975036e415994a80721794ae8277072d0a93c174b4f7768019 Mar 20 21:32:10.038956 systemd-resolved[232]: Positive Trust Anchors: Mar 20 21:32:10.038973 systemd-resolved[232]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:32:10.039003 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:32:10.041851 systemd-resolved[232]: Defaulting to hostname 'linux'. Mar 20 21:32:10.042989 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:32:10.049967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:32:10.071610 kernel: SCSI subsystem initialized Mar 20 21:32:10.080609 kernel: Loading iSCSI transport class v2.0-870. Mar 20 21:32:10.092611 kernel: iscsi: registered transport (tcp) Mar 20 21:32:10.113607 kernel: iscsi: registered transport (qla4xxx) Mar 20 21:32:10.113641 kernel: QLogic iSCSI HBA Driver Mar 20 21:32:10.156034 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 20 21:32:10.158620 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 20 21:32:10.195678 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 20 21:32:10.195715 kernel: device-mapper: uevent: version 1.0.3 Mar 20 21:32:10.196710 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 20 21:32:10.236603 kernel: raid6: avx2x4 gen() 29481 MB/s Mar 20 21:32:10.253598 kernel: raid6: avx2x2 gen() 30237 MB/s Mar 20 21:32:10.270685 kernel: raid6: avx2x1 gen() 25395 MB/s Mar 20 21:32:10.270704 kernel: raid6: using algorithm avx2x2 gen() 30237 MB/s Mar 20 21:32:10.288724 kernel: raid6: .... xor() 19313 MB/s, rmw enabled Mar 20 21:32:10.288783 kernel: raid6: using avx2x2 recovery algorithm Mar 20 21:32:10.313645 kernel: xor: automatically using best checksumming function avx Mar 20 21:32:10.516626 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 20 21:32:10.529922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:32:10.532705 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:32:10.570738 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 20 21:32:10.576268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:32:10.580697 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 20 21:32:10.604849 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Mar 20 21:32:10.641070 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:32:10.644832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:32:10.734984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:32:10.739272 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 20 21:32:10.763026 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 20 21:32:10.767196 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 20 21:32:10.769994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:32:10.772717 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:32:10.775714 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 20 21:32:10.816252 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 20 21:32:10.816447 kernel: cryptd: max_cpu_qlen set to 1000 Mar 20 21:32:10.816468 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 20 21:32:10.816479 kernel: GPT:9289727 != 19775487 Mar 20 21:32:10.816490 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 20 21:32:10.816500 kernel: GPT:9289727 != 19775487 Mar 20 21:32:10.816510 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 20 21:32:10.816521 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:32:10.816531 kernel: libata version 3.00 loaded. Mar 20 21:32:10.779731 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 20 21:32:10.805227 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:32:10.820962 kernel: AVX2 version of gcm_enc/dec engaged. Mar 20 21:32:10.820992 kernel: AES CTR mode by8 optimization enabled Mar 20 21:32:10.813174 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:32:10.813358 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:32:10.815046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 20 21:32:10.831326 kernel: ahci 0000:00:1f.2: version 3.0 Mar 20 21:32:10.863912 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 20 21:32:10.863934 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 20 21:32:10.864093 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 20 21:32:10.864240 kernel: scsi host0: ahci Mar 20 21:32:10.864408 kernel: scsi host1: ahci Mar 20 21:32:10.864610 kernel: BTRFS: device fsid 48a514e8-9ecc-46c2-935b-caca347f921e devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (463) Mar 20 21:32:10.864624 kernel: scsi host2: ahci Mar 20 21:32:10.864780 kernel: scsi host3: ahci Mar 20 21:32:10.864943 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (470) Mar 20 21:32:10.864955 kernel: scsi host4: ahci Mar 20 21:32:10.865097 kernel: scsi host5: ahci Mar 20 21:32:10.865293 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 20 21:32:10.865318 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 20 21:32:10.865332 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 20 21:32:10.865346 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 20 21:32:10.865365 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 20 21:32:10.865377 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 20 21:32:10.816696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:32:10.818157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:32:10.823007 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:32:10.828991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:32:10.877921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 20 21:32:10.887642 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 20 21:32:10.896220 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 20 21:32:10.896814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 20 21:32:10.911631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:32:10.912990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 20 21:32:10.915059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:32:10.915116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:32:10.917502 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:32:10.925183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:32:10.925809 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:32:10.937541 disk-uuid[559]: Primary Header is updated. Mar 20 21:32:10.937541 disk-uuid[559]: Secondary Entries is updated. Mar 20 21:32:10.937541 disk-uuid[559]: Secondary Header is updated. Mar 20 21:32:10.941063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:32:10.946603 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:32:10.947936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:32:10.952250 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:32:11.003761 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 20 21:32:11.176760 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 20 21:32:11.176849 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 20 21:32:11.176865 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 20 21:32:11.178607 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 20 21:32:11.178650 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 20 21:32:11.179610 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 20 21:32:11.180611 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 20 21:32:11.181818 kernel: ata3.00: applying bridge limits Mar 20 21:32:11.181837 kernel: ata3.00: configured for UDMA/100 Mar 20 21:32:11.182600 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 20 21:32:11.235709 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 20 21:32:11.256514 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 20 21:32:11.256535 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 20 21:32:11.948368 disk-uuid[562]: The operation has completed successfully. Mar 20 21:32:11.950349 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:32:11.985447 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 21:32:11.985660 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 21:32:12.027424 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 21:32:12.046440 sh[600]: Success Mar 20 21:32:12.059647 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 20 21:32:12.098910 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 21:32:12.102938 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 21:32:12.119731 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 20 21:32:12.126565 kernel: BTRFS info (device dm-0): first mount of filesystem 48a514e8-9ecc-46c2-935b-caca347f921e Mar 20 21:32:12.126607 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:32:12.126618 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 21:32:12.126630 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 21:32:12.127315 kernel: BTRFS info (device dm-0): using free space tree Mar 20 21:32:12.132700 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 21:32:12.135363 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 20 21:32:12.136354 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 21:32:12.140922 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 20 21:32:12.171496 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:32:12.171532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:32:12.171543 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:32:12.175595 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:32:12.180615 kernel: BTRFS info (device vda6): last unmount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:32:12.187312 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 21:32:12.189869 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 20 21:32:12.249440 ignition[695]: Ignition 2.20.0 Mar 20 21:32:12.249454 ignition[695]: Stage: fetch-offline Mar 20 21:32:12.249493 ignition[695]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:32:12.249504 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:32:12.249624 ignition[695]: parsed url from cmdline: "" Mar 20 21:32:12.249628 ignition[695]: no config URL provided Mar 20 21:32:12.249633 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 21:32:12.249643 ignition[695]: no config at "/usr/lib/ignition/user.ign" Mar 20 21:32:12.249670 ignition[695]: op(1): [started] loading QEMU firmware config module Mar 20 21:32:12.249675 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 20 21:32:12.258427 ignition[695]: op(1): [finished] loading QEMU firmware config module Mar 20 21:32:12.281213 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:32:12.283605 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:32:12.301727 ignition[695]: parsing config with SHA512: 7e9270bceb83447bb014210fa293b23adc16431cc3e81f0d06bbcbaaee6a41df62032a7700f0537b0acfa4c552190d94cd1cf61a72b142df7ee050c1340aebcf Mar 20 21:32:12.307495 unknown[695]: fetched base config from "system" Mar 20 21:32:12.307517 unknown[695]: fetched user config from "qemu" Mar 20 21:32:12.309458 ignition[695]: fetch-offline: fetch-offline passed Mar 20 21:32:12.309540 ignition[695]: Ignition finished successfully Mar 20 21:32:12.312858 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:32:12.324166 systemd-networkd[786]: lo: Link UP Mar 20 21:32:12.324177 systemd-networkd[786]: lo: Gained carrier Mar 20 21:32:12.325887 systemd-networkd[786]: Enumeration completed Mar 20 21:32:12.325975 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 20 21:32:12.326302 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:32:12.326307 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:32:12.327183 systemd-networkd[786]: eth0: Link UP Mar 20 21:32:12.327187 systemd-networkd[786]: eth0: Gained carrier Mar 20 21:32:12.327193 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:32:12.328234 systemd[1]: Reached target network.target - Network. Mar 20 21:32:12.330297 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 20 21:32:12.331119 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 21:32:12.342694 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:32:12.358259 ignition[790]: Ignition 2.20.0 Mar 20 21:32:12.358280 ignition[790]: Stage: kargs Mar 20 21:32:12.358449 ignition[790]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:32:12.358461 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:32:12.359313 ignition[790]: kargs: kargs passed Mar 20 21:32:12.359358 ignition[790]: Ignition finished successfully Mar 20 21:32:12.362993 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 20 21:32:12.366218 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 20 21:32:12.392073 ignition[800]: Ignition 2.20.0 Mar 20 21:32:12.392084 ignition[800]: Stage: disks Mar 20 21:32:12.392257 ignition[800]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:32:12.392278 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:32:12.393125 ignition[800]: disks: disks passed Mar 20 21:32:12.395557 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 20 21:32:12.393170 ignition[800]: Ignition finished successfully Mar 20 21:32:12.397317 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 20 21:32:12.399108 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 21:32:12.400311 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:32:12.401941 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:32:12.403934 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:32:12.406815 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 20 21:32:12.435029 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 20 21:32:12.441974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 21:32:12.443247 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 21:32:12.538606 kernel: EXT4-fs (vda9): mounted filesystem 79cdbe74-6884-4c57-b04d-c9a431509f16 r/w with ordered data mode. Quota mode: none. Mar 20 21:32:12.539299 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 21:32:12.540539 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 20 21:32:12.543199 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:32:12.545775 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 20 21:32:12.546958 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Mar 20 21:32:12.547007 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 20 21:32:12.547039 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:32:12.556839 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 21:32:12.559718 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 20 21:32:12.563055 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (819) Mar 20 21:32:12.565650 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:32:12.565666 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:32:12.565677 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:32:12.569641 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:32:12.571052 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 20 21:32:12.599875 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 21:32:12.604806 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Mar 20 21:32:12.609357 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 21:32:12.613930 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 21:32:12.705066 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 21:32:12.707529 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 20 21:32:12.708626 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 21:32:12.737621 kernel: BTRFS info (device vda6): last unmount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:32:12.748219 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 20 21:32:12.765195 ignition[934]: INFO : Ignition 2.20.0 Mar 20 21:32:12.765195 ignition[934]: INFO : Stage: mount Mar 20 21:32:12.767277 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:32:12.767277 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:32:12.769950 ignition[934]: INFO : mount: mount passed Mar 20 21:32:12.770798 ignition[934]: INFO : Ignition finished successfully Mar 20 21:32:12.773668 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 20 21:32:12.776696 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 20 21:32:13.125135 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 20 21:32:13.126828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:32:13.148615 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (946) Mar 20 21:32:13.151295 kernel: BTRFS info (device vda6): first mount of filesystem c415ef49-5595-4a0b-ba48-8f3e642f303e Mar 20 21:32:13.151323 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 21:32:13.151339 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:32:13.155617 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:32:13.156986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 20 21:32:13.199835 ignition[963]: INFO : Ignition 2.20.0
Mar 20 21:32:13.199835 ignition[963]: INFO : Stage: files
Mar 20 21:32:13.201798 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:32:13.201798 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:32:13.201798 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Mar 20 21:32:13.205694 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 20 21:32:13.205694 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 20 21:32:13.208706 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 20 21:32:13.210315 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 20 21:32:13.212335 unknown[963]: wrote ssh authorized keys file for user: core
Mar 20 21:32:13.213820 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 20 21:32:13.215524 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 20 21:32:13.215524 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Mar 20 21:32:13.262978 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 20 21:32:13.437873 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 20 21:32:13.437873 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 20 21:32:13.443053 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 20 21:32:13.617836 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 20 21:32:13.885145 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 20 21:32:13.973295 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 20 21:32:13.975115 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 20 21:32:13.977000 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 20 21:32:13.978779 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 21:32:13.980764 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 21:32:13.982512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:32:13.984328 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:32:13.986072 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 21:32:13.987826 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 21:32:13.990048 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 21:32:13.992007 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 21:32:13.992007 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 20 21:32:13.992007 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 20 21:32:13.992007 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 20 21:32:13.992007 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Mar 20 21:32:14.284945 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 20 21:32:14.800759 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 20 21:32:14.800759 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 20 21:32:14.804525 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:32:14.806413 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:32:14.806413 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 20 21:32:14.806413 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 20 21:32:14.806413 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 21:32:14.806413 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 21:32:14.806413 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 20 21:32:14.806413 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 20 21:32:14.834235 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 21:32:14.838987 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 21:32:14.840703 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 20 21:32:14.840703 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 20 21:32:14.840703 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 20 21:32:14.840703 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 21:32:14.840703 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 21:32:14.840703 ignition[963]: INFO : files: files passed
Mar 20 21:32:14.840703 ignition[963]: INFO : Ignition finished successfully
Mar 20 21:32:14.852659 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 20 21:32:14.854339 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 20 21:32:14.857348 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 20 21:32:14.877346 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 20 21:32:14.878538 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 20 21:32:14.880816 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 20 21:32:14.882377 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:32:14.882377 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:32:14.885535 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:32:14.885555 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 21:32:14.886348 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 20 21:32:14.887384 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 20 21:32:14.955206 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 20 21:32:14.955339 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 20 21:32:14.957795 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 20 21:32:14.959754 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 20 21:32:14.961771 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 20 21:32:14.962639 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 20 21:32:14.990034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 20 21:32:14.993887 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 20 21:32:15.015336 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:32:15.017722 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:32:15.019094 systemd[1]: Stopped target timers.target - Timer Units.
Mar 20 21:32:15.021044 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 20 21:32:15.021176 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 20 21:32:15.023412 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 20 21:32:15.025217 systemd[1]: Stopped target basic.target - Basic System.
Mar 20 21:32:15.027355 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 20 21:32:15.029413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 20 21:32:15.031420 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 20 21:32:15.033648 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 20 21:32:15.035778 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 20 21:32:15.038056 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 20 21:32:15.040053 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 20 21:32:15.042219 systemd[1]: Stopped target swap.target - Swaps.
Mar 20 21:32:15.043976 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 20 21:32:15.044101 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 20 21:32:15.046227 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 20 21:32:15.047850 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 21:32:15.049912 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 20 21:32:15.050040 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 21:32:15.052155 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 20 21:32:15.052277 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 20 21:32:15.054473 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 20 21:32:15.054604 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 20 21:32:15.056571 systemd[1]: Stopped target paths.target - Path Units.
Mar 20 21:32:15.058292 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 20 21:32:15.061692 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 21:32:15.063914 systemd[1]: Stopped target slices.target - Slice Units.
Mar 20 21:32:15.065875 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 20 21:32:15.067662 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 20 21:32:15.067764 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 21:32:15.069661 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 20 21:32:15.069748 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 21:32:15.072092 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 20 21:32:15.072224 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 21:32:15.074151 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 20 21:32:15.074267 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 20 21:32:15.077007 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 20 21:32:15.078628 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 20 21:32:15.078744 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:32:15.081436 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 20 21:32:15.082447 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 20 21:32:15.082573 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 21:32:15.084776 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 20 21:32:15.084885 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 21:32:15.091964 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 20 21:32:15.092071 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 20 21:32:15.123877 ignition[1018]: INFO : Ignition 2.20.0
Mar 20 21:32:15.123877 ignition[1018]: INFO : Stage: umount
Mar 20 21:32:15.125682 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:32:15.125682 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:32:15.125682 ignition[1018]: INFO : umount: umount passed
Mar 20 21:32:15.125682 ignition[1018]: INFO : Ignition finished successfully
Mar 20 21:32:15.126236 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 20 21:32:15.126861 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 20 21:32:15.126975 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 20 21:32:15.129837 systemd[1]: Stopped target network.target - Network.
Mar 20 21:32:15.131554 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 20 21:32:15.131650 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 20 21:32:15.133435 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 20 21:32:15.133486 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 20 21:32:15.135283 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 20 21:32:15.135333 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 20 21:32:15.137447 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 20 21:32:15.137494 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 20 21:32:15.139466 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 20 21:32:15.141382 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 20 21:32:15.149462 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 20 21:32:15.149628 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 20 21:32:15.154266 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 20 21:32:15.154513 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 20 21:32:15.154658 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 20 21:32:15.158131 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 20 21:32:15.158919 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 20 21:32:15.158987 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 21:32:15.161269 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 20 21:32:15.162568 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 20 21:32:15.162648 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 20 21:32:15.164918 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 20 21:32:15.164977 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:32:15.167240 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 20 21:32:15.167291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:32:15.169248 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 20 21:32:15.169306 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 21:32:15.171765 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:32:15.175266 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 20 21:32:15.175348 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 20 21:32:15.189887 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 20 21:32:15.191060 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:32:15.194257 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 20 21:32:15.195305 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 20 21:32:15.197937 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 20 21:32:15.199010 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:32:15.201134 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 20 21:32:15.201180 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:32:15.204160 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 20 21:32:15.205136 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 21:32:15.207561 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 20 21:32:15.207637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 20 21:32:15.210640 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 21:32:15.210697 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:32:15.215054 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 20 21:32:15.216237 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 20 21:32:15.217422 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:32:15.221125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 21:32:15.221191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:32:15.225649 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 20 21:32:15.227073 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 20 21:32:15.236861 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 20 21:32:15.238019 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 20 21:32:15.297085 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 20 21:32:15.297223 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 20 21:32:15.300126 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 20 21:32:15.302135 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 20 21:32:15.302188 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 20 21:32:15.305915 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 20 21:32:15.325112 systemd[1]: Switching root.
Mar 20 21:32:15.352914 systemd-journald[191]: Journal stopped
Mar 20 21:32:16.572214 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Mar 20 21:32:16.572315 kernel: SELinux: policy capability network_peer_controls=1
Mar 20 21:32:16.572334 kernel: SELinux: policy capability open_perms=1
Mar 20 21:32:16.572349 kernel: SELinux: policy capability extended_socket_class=1
Mar 20 21:32:16.572364 kernel: SELinux: policy capability always_check_network=0
Mar 20 21:32:16.572379 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 20 21:32:16.572400 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 20 21:32:16.572415 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 20 21:32:16.572430 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 20 21:32:16.572445 kernel: audit: type=1403 audit(1742506335.695:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 20 21:32:16.572464 systemd[1]: Successfully loaded SELinux policy in 40.154ms.
Mar 20 21:32:16.572497 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.596ms.
Mar 20 21:32:16.572514 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 21:32:16.572530 systemd[1]: Detected virtualization kvm.
Mar 20 21:32:16.572546 systemd[1]: Detected architecture x86-64.
Mar 20 21:32:16.572562 systemd[1]: Detected first boot.
Mar 20 21:32:16.572578 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 21:32:16.572633 zram_generator::config[1065]: No configuration found.
Mar 20 21:32:16.572659 kernel: Guest personality initialized and is inactive
Mar 20 21:32:16.572675 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 20 21:32:16.572690 kernel: Initialized host personality
Mar 20 21:32:16.572705 kernel: NET: Registered PF_VSOCK protocol family
Mar 20 21:32:16.572720 systemd[1]: Populated /etc with preset unit settings.
Mar 20 21:32:16.572743 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 20 21:32:16.572759 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 20 21:32:16.572775 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 20 21:32:16.572791 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 20 21:32:16.572810 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 20 21:32:16.572826 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 20 21:32:16.572842 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 20 21:32:16.572859 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 20 21:32:16.572876 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 20 21:32:16.572893 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 20 21:32:16.572909 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 20 21:32:16.572925 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 20 21:32:16.572942 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 21:32:16.572962 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 21:32:16.572978 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 20 21:32:16.572994 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 20 21:32:16.573011 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 20 21:32:16.573029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 21:32:16.573048 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 20 21:32:16.573069 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 21:32:16.573094 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 20 21:32:16.573114 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 20 21:32:16.573139 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 20 21:32:16.573158 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 20 21:32:16.573183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:32:16.573203 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 21:32:16.573217 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 21:32:16.573232 systemd[1]: Reached target swap.target - Swaps.
Mar 20 21:32:16.573250 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 20 21:32:16.573268 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 20 21:32:16.573289 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 20 21:32:16.573304 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 21:32:16.573318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:32:16.573342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:32:16.573365 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 20 21:32:16.573392 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 20 21:32:16.573420 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 20 21:32:16.573444 systemd[1]: Mounting media.mount - External Media Directory...
Mar 20 21:32:16.573471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:32:16.573504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 20 21:32:16.573531 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 20 21:32:16.573561 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 20 21:32:16.574275 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 20 21:32:16.574300 systemd[1]: Reached target machines.target - Containers.
Mar 20 21:32:16.574317 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 20 21:32:16.574333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 21:32:16.574350 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 21:32:16.574371 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 20 21:32:16.574387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 21:32:16.574404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 21:32:16.574422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 21:32:16.574438 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 20 21:32:16.574455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 21:32:16.574472 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 20 21:32:16.574490 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 20 21:32:16.574510 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 20 21:32:16.574527 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 20 21:32:16.574542 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 20 21:32:16.574560 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 21:32:16.574577 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 21:32:16.574608 kernel: loop: module loaded
Mar 20 21:32:16.574625 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 21:32:16.574640 kernel: fuse: init (API version 7.39)
Mar 20 21:32:16.574656 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 20 21:32:16.574678 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 20 21:32:16.574695 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 20 21:32:16.574712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 21:32:16.574728 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 20 21:32:16.574744 systemd[1]: Stopped verity-setup.service.
Mar 20 21:32:16.574765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:32:16.574782 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 20 21:32:16.574799 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 20 21:32:16.574816 systemd[1]: Mounted media.mount - External Media Directory.
Mar 20 21:32:16.574832 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 20 21:32:16.574883 systemd-journald[1134]: Collecting audit messages is disabled.
Mar 20 21:32:16.574925 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 20 21:32:16.574943 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 20 21:32:16.574959 systemd-journald[1134]: Journal started
Mar 20 21:32:16.574989 systemd-journald[1134]: Runtime Journal (/run/log/journal/b70e6e4813174c35bd8f27f2a45e8cec) is 6M, max 48.2M, 42.2M free.
Mar 20 21:32:16.307493 systemd[1]: Queued start job for default target multi-user.target.
Mar 20 21:32:16.322721 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 20 21:32:16.323225 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 20 21:32:16.582681 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 21:32:16.582754 kernel: ACPI: bus type drm_connector registered
Mar 20 21:32:16.579741 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:32:16.582718 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 20 21:32:16.584904 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 20 21:32:16.585186 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 20 21:32:16.587095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 21:32:16.587366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 21:32:16.589056 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 21:32:16.589310 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 21:32:16.591112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 21:32:16.591383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 21:32:16.593492 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 20 21:32:16.593773 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 20 21:32:16.595708 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 21:32:16.595969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 21:32:16.598073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:32:16.599577 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 20 21:32:16.601337 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 20 21:32:16.603054 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 20 21:32:16.623050 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 20 21:32:16.626014 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 20 21:32:16.628386 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 20 21:32:16.629537 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 20 21:32:16.629562 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 20 21:32:16.631946 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 20 21:32:16.638684 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 20 21:32:16.641770 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 20 21:32:16.643298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 21:32:16.646457 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 20 21:32:16.654208 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 20 21:32:16.655607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 20 21:32:16.659522 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 20 21:32:16.660754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 20 21:32:16.664764 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:32:16.669910 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 20 21:32:16.672882 systemd-journald[1134]: Time spent on flushing to /var/log/journal/b70e6e4813174c35bd8f27f2a45e8cec is 19.872ms for 1060 entries.
Mar 20 21:32:16.672882 systemd-journald[1134]: System Journal (/var/log/journal/b70e6e4813174c35bd8f27f2a45e8cec) is 8M, max 195.6M, 187.6M free.
Mar 20 21:32:16.702081 systemd-journald[1134]: Received client request to flush runtime journal.
Mar 20 21:32:16.702121 kernel: loop0: detected capacity change from 0 to 218376
Mar 20 21:32:16.675788 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 20 21:32:16.686228 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 20 21:32:16.687655 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 20 21:32:16.689240 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 20 21:32:16.691314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 21:32:16.692995 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 20 21:32:16.704746 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 20 21:32:16.711390 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 20 21:32:16.717200 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 20 21:32:16.721384 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 20 21:32:16.727361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:32:16.732379 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 20 21:32:16.737888 udevadm[1199]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 20 21:32:16.751458 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 20 21:32:16.755546 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 20 21:32:16.759280 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 21:32:16.870681 kernel: loop1: detected capacity change from 0 to 109808
Mar 20 21:32:16.903028 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Mar 20 21:32:16.903046 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Mar 20 21:32:16.908616 kernel: loop2: detected capacity change from 0 to 151640
Mar 20 21:32:16.910288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:32:17.026611 kernel: loop3: detected capacity change from 0 to 218376
Mar 20 21:32:17.040612 kernel: loop4: detected capacity change from 0 to 109808
Mar 20 21:32:17.052611 kernel: loop5: detected capacity change from 0 to 151640
Mar 20 21:32:17.072423 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 20 21:32:17.073068 (sd-merge)[1210]: Merged extensions into '/usr'.
Mar 20 21:32:17.079848 systemd[1]: Reload requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 20 21:32:17.079864 systemd[1]: Reloading...
Mar 20 21:32:17.159685 zram_generator::config[1237]: No configuration found.
Mar 20 21:32:17.190330 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 20 21:32:17.301290 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 21:32:17.367394 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 20 21:32:17.367479 systemd[1]: Reloading finished in 287 ms.
Mar 20 21:32:17.386562 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 20 21:32:17.388185 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 20 21:32:17.406228 systemd[1]: Starting ensure-sysext.service...
Mar 20 21:32:17.408510 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 21:32:17.421912 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)...
Mar 20 21:32:17.421944 systemd[1]: Reloading...
Mar 20 21:32:17.445537 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 20 21:32:17.446343 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 20 21:32:17.448025 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 20 21:32:17.448351 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Mar 20 21:32:17.448511 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Mar 20 21:32:17.455219 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
Mar 20 21:32:17.455231 systemd-tmpfiles[1276]: Skipping /boot
Mar 20 21:32:17.472005 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
Mar 20 21:32:17.472021 systemd-tmpfiles[1276]: Skipping /boot
Mar 20 21:32:17.491603 zram_generator::config[1305]: No configuration found.
Mar 20 21:32:17.609420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 21:32:17.675853 systemd[1]: Reloading finished in 253 ms.
Mar 20 21:32:17.689620 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 20 21:32:17.712751 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 21:32:17.722202 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 21:32:17.724933 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 20 21:32:17.734322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 20 21:32:17.738108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 21:32:17.741849 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:32:17.748091 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 20 21:32:17.755914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:32:17.756084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 21:32:17.757877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 21:32:17.760931 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 21:32:17.765247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 21:32:17.766795 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 21:32:17.766907 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 21:32:17.770908 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 20 21:32:17.772055 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:32:17.773339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 21:32:17.773793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 21:32:17.776118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 21:32:17.782851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 21:32:17.785634 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 20 21:32:17.787492 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 21:32:17.788057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 21:32:17.791345 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
Mar 20 21:32:17.791698 augenrules[1373]: No rules
Mar 20 21:32:17.792749 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 21:32:17.793031 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 21:32:17.802182 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 20 21:32:17.806720 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:32:17.807069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 21:32:17.808883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 21:32:17.811941 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 21:32:17.820869 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 21:32:17.822112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 21:32:17.822265 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 21:32:17.825770 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 20 21:32:17.829710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:32:17.831027 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:32:17.833003 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 20 21:32:17.834817 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 20 21:32:17.837705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 21:32:17.838223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 21:32:17.846205 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 21:32:17.846496 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 21:32:17.849148 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 21:32:17.849435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 21:32:17.859835 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 20 21:32:17.871161 systemd[1]: Finished ensure-sysext.service.
Mar 20 21:32:17.876339 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 20 21:32:17.877611 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1403)
Mar 20 21:32:17.878396 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:32:17.881775 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 21:32:17.883055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 21:32:17.886299 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 21:32:17.890120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 21:32:17.927852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 21:32:17.932196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 21:32:17.933688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 21:32:17.933731 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 21:32:17.937772 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 20 21:32:17.947965 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 20 21:32:17.949135 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 20 21:32:17.949162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 20 21:32:17.949866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 21:32:17.950090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 21:32:17.953028 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 21:32:17.953237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 21:32:17.954901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 21:32:17.955123 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 21:32:17.968763 systemd-resolved[1347]: Positive Trust Anchors:
Mar 20 21:32:17.968780 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 21:32:17.980528 augenrules[1419]: /sbin/augenrules: No change
Mar 20 21:32:17.968811 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 21:32:17.976788 systemd-resolved[1347]: Defaulting to hostname 'linux'.
Mar 20 21:32:17.977360 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 21:32:17.978810 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 21:32:17.985164 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 21:32:17.985515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 21:32:17.999481 augenrules[1453]: No rules
Mar 20 21:32:18.001946 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:32:18.004411 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 20 21:32:18.007731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 20 21:32:18.007791 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 20 21:32:18.008151 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 21:32:18.008413 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 21:32:18.047642 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 20 21:32:18.052611 kernel: ACPI: button: Power Button [PWRF]
Mar 20 21:32:18.057978 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 20 21:32:18.067383 systemd-networkd[1428]: lo: Link UP
Mar 20 21:32:18.067393 systemd-networkd[1428]: lo: Gained carrier
Mar 20 21:32:18.069336 systemd-networkd[1428]: Enumeration completed
Mar 20 21:32:18.069432 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 20 21:32:18.070677 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 21:32:18.070686 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 20 21:32:18.070721 systemd[1]: Reached target network.target - Network.
Mar 20 21:32:18.072318 systemd-networkd[1428]: eth0: Link UP
Mar 20 21:32:18.072327 systemd-networkd[1428]: eth0: Gained carrier
Mar 20 21:32:18.072341 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 21:32:18.073747 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 20 21:32:18.078718 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 20 21:32:18.087470 systemd-networkd[1428]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 20 21:32:18.091730 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 20 21:32:18.094825 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 20 21:32:18.095025 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 20 21:32:18.095229 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 20 21:32:18.104613 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 20 21:32:18.114334 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 20 21:32:18.942433 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 20 21:32:18.942775 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 20 21:32:18.944543 systemd[1]: Reached target time-set.target - System Time Set.
Mar 20 21:32:18.944885 systemd-resolved[1347]: Clock change detected. Flushing caches.
Mar 20 21:32:18.945082 systemd-timesyncd[1434]: Initial clock synchronization to Thu 2025-03-20 21:32:18.942270 UTC.
Mar 20 21:32:18.993062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:32:19.030700 kernel: mousedev: PS/2 mouse device common for all mice
Mar 20 21:32:19.106843 kernel: kvm_amd: TSC scaling supported
Mar 20 21:32:19.106981 kernel: kvm_amd: Nested Virtualization enabled
Mar 20 21:32:19.107045 kernel: kvm_amd: Nested Paging enabled
Mar 20 21:32:19.107072 kernel: kvm_amd: LBR virtualization supported
Mar 20 21:32:19.107901 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 20 21:32:19.107934 kernel: kvm_amd: Virtual GIF supported
Mar 20 21:32:19.128706 kernel: EDAC MC: Ver: 3.0.0
Mar 20 21:32:19.168190 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 20 21:32:19.170107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:32:19.174742 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 20 21:32:19.193612 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 20 21:32:19.230188 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 20 21:32:19.233138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 21:32:19.234416 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 20 21:32:19.235765 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 20 21:32:19.237145 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 20 21:32:19.238925 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 20 21:32:19.240265 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 20 21:32:19.241558 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 20 21:32:19.242841 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 20 21:32:19.242883 systemd[1]: Reached target paths.target - Path Units.
Mar 20 21:32:19.243877 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 21:32:19.246420 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 20 21:32:19.249938 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 20 21:32:19.254313 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 20 21:32:19.255922 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 20 21:32:19.257308 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 20 21:32:19.262564 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 20 21:32:19.264368 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 20 21:32:19.267072 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 20 21:32:19.269159 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 20 21:32:19.270556 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 21:32:19.271725 systemd[1]: Reached target basic.target - Basic System.
Mar 20 21:32:19.272245 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 20 21:32:19.272268 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 20 21:32:19.273451 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 20 21:32:19.275824 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 20 21:32:19.278685 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 20 21:32:19.280719 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 20 21:32:19.283750 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 20 21:32:19.285187 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 20 21:32:19.286581 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 20 21:32:19.291858 jq[1487]: false
Mar 20 21:32:19.292206 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 20 21:32:19.296277 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 20 21:32:19.300370 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 20 21:32:19.307109 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 20 21:32:19.309552 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 20 21:32:19.310067 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 20 21:32:19.310757 systemd[1]: Starting update-engine.service - Update Engine...
Mar 20 21:32:19.314183 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 20 21:32:19.322754 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 20 21:32:19.324722 extend-filesystems[1488]: Found loop3
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found loop4
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found loop5
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found sr0
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found vda
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found vda1
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found vda2
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found vda3
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found usr
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found vda4
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found vda6
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found vda7
Mar 20 21:32:19.328472 extend-filesystems[1488]: Found vda9
Mar 20 21:32:19.328472 extend-filesystems[1488]: Checking size of /dev/vda9
Mar 20 21:32:19.358887 jq[1502]: true
Mar 20 21:32:19.325685 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 20 21:32:19.359060 update_engine[1501]: I20250320 21:32:19.348709 1501 main.cc:92] Flatcar Update Engine starting
Mar 20 21:32:19.359060 update_engine[1501]: I20250320 21:32:19.351735 1501 update_check_scheduler.cc:74] Next update check in 4m59s
Mar 20 21:32:19.334099 dbus-daemon[1486]: [system] SELinux support is enabled
Mar 20 21:32:19.359509 extend-filesystems[1488]: Resized partition /dev/vda9
Mar 20 21:32:19.362771 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 20 21:32:19.325939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 20 21:32:19.362945 extend-filesystems[1512]: resize2fs 1.47.2 (1-Jan-2025)
Mar 20 21:32:19.373190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1391)
Mar 20 21:32:19.326276 systemd[1]: motdgen.service: Deactivated successfully.
Mar 20 21:32:19.326524 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 20 21:32:19.381695 tar[1507]: linux-amd64/LICENSE
Mar 20 21:32:19.381695 tar[1507]: linux-amd64/helm
Mar 20 21:32:19.331896 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 20 21:32:19.388800 jq[1511]: true
Mar 20 21:32:19.332237 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 20 21:32:19.345137 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 20 21:32:19.367042 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 20 21:32:19.399209 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 20 21:32:19.373763 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 20 21:32:19.373791 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 20 21:32:19.375578 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 20 21:32:19.423847 extend-filesystems[1512]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 20 21:32:19.423847 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 20 21:32:19.423847 extend-filesystems[1512]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 20 21:32:19.375594 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 20 21:32:19.440153 extend-filesystems[1488]: Resized filesystem in /dev/vda9
Mar 20 21:32:19.377439 systemd[1]: Started update-engine.service - Update Engine.
Mar 20 21:32:19.385467 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 20 21:32:19.425359 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 20 21:32:19.425797 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 20 21:32:19.445581 systemd-logind[1496]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 20 21:32:19.445611 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 20 21:32:19.450857 systemd-logind[1496]: New seat seat0.
Mar 20 21:32:19.509727 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 20 21:32:19.518627 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 20 21:32:19.518951 bash[1542]: Updated "/home/core/.ssh/authorized_keys"
Mar 20 21:32:19.521831 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 20 21:32:19.524364 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 20 21:32:19.774718 containerd[1513]: time="2025-03-20T21:32:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 20 21:32:19.777153 containerd[1513]: time="2025-03-20T21:32:19.777105860Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 20 21:32:19.791378 containerd[1513]: time="2025-03-20T21:32:19.791323944Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.67µs"
Mar 20 21:32:19.791378 containerd[1513]: time="2025-03-20T21:32:19.791371674Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 20 21:32:19.791456 containerd[1513]: time="2025-03-20T21:32:19.791396561Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 20 21:32:19.791686 containerd[1513]: time="2025-03-20T21:32:19.791655156Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 20 21:32:19.791834 containerd[1513]: time="2025-03-20T21:32:19.791820075Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 20 21:32:19.791911 containerd[1513]: time="2025-03-20T21:32:19.791896208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 20 21:32:19.792045 containerd[1513]: time="2025-03-20T21:32:19.792027745Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 20 21:32:19.792114 containerd[1513]: time="2025-03-20T21:32:19.792100441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 20 21:32:19.792520 containerd[1513]: time="2025-03-20T21:32:19.792498949Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 20 21:32:19.792593 containerd[1513]: time="2025-03-20T21:32:19.792578809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 20 21:32:19.792664 containerd[1513]: time="2025-03-20T21:32:19.792631728Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 20 21:32:19.792714 containerd[1513]: time="2025-03-20T21:32:19.792701228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 20 21:32:19.792876 containerd[1513]: time="2025-03-20T21:32:19.792860076Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 20 21:32:19.793200 containerd[1513]: time="2025-03-20T21:32:19.793181780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 20 21:32:19.793280 containerd[1513]: time="2025-03-20T21:32:19.793265236Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 20 21:32:19.793330 containerd[1513]: time="2025-03-20T21:32:19.793315801Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 20 21:32:19.793413 containerd[1513]: time="2025-03-20T21:32:19.793398657Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 20 21:32:19.794882 containerd[1513]: time="2025-03-20T21:32:19.794831104Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 20 21:32:19.795184 containerd[1513]: time="2025-03-20T21:32:19.794967600Z" level=info msg="metadata content store policy set" policy=shared
Mar 20 21:32:19.801594 containerd[1513]: time="2025-03-20T21:32:19.801566630Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 20 21:32:19.801713 containerd[1513]: time="2025-03-20T21:32:19.801697426Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 20 21:32:19.801815 containerd[1513]: time="2025-03-20T21:32:19.801799988Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 20 21:32:19.801875 containerd[1513]: time="2025-03-20T21:32:19.801862415Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 20 21:32:19.801934 containerd[1513]: time="2025-03-20T21:32:19.801921757Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 20 21:32:19.802011 containerd[1513]: time="2025-03-20T21:32:19.801997058Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802052652Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802074543Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802090343Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 20 21:32:19.802756 containerd[1513]:
time="2025-03-20T21:32:19.802102295Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802114939Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802130338Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802279177Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802308592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802322488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802334831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802349810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802373514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802388292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802410513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 21:32:19.802756 containerd[1513]: time="2025-03-20T21:32:19.802425091Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 21:32:19.803169 containerd[1513]: time="2025-03-20T21:32:19.802437855Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 21:32:19.803169 containerd[1513]: time="2025-03-20T21:32:19.802451440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 21:32:19.803169 containerd[1513]: time="2025-03-20T21:32:19.802544325Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 21:32:19.803169 containerd[1513]: time="2025-03-20T21:32:19.802560525Z" level=info msg="Start snapshots syncer" Mar 20 21:32:19.803169 containerd[1513]: time="2025-03-20T21:32:19.802586714Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 21:32:19.803270 containerd[1513]: time="2025-03-20T21:32:19.802869274Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 21:32:19.803270 containerd[1513]: time="2025-03-20T21:32:19.802911794Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.802976996Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803084338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803104085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803116227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803128581Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803168906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803191619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803206016Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803233467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803257873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803270667Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803301816Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803333275Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:32:19.803447 containerd[1513]: time="2025-03-20T21:32:19.803344706Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803354755Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803363712Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803373991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803387737Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803408756Z" level=info msg="runtime interface created" Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803414737Z" level=info msg="created NRI interface" Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803423143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803434234Z" level=info msg="Connect containerd service" Mar 20 21:32:19.803731 containerd[1513]: time="2025-03-20T21:32:19.803459922Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 21:32:19.806275 
containerd[1513]: time="2025-03-20T21:32:19.806238816Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:32:19.996483 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 21:32:20.033787 containerd[1513]: time="2025-03-20T21:32:20.033681164Z" level=info msg="Start subscribing containerd event" Mar 20 21:32:20.034047 containerd[1513]: time="2025-03-20T21:32:20.033937665Z" level=info msg="Start recovering state" Mar 20 21:32:20.034260 containerd[1513]: time="2025-03-20T21:32:20.033954487Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 21:32:20.034260 containerd[1513]: time="2025-03-20T21:32:20.034200258Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 21:32:20.034432 containerd[1513]: time="2025-03-20T21:32:20.034401255Z" level=info msg="Start event monitor" Mar 20 21:32:20.035482 containerd[1513]: time="2025-03-20T21:32:20.034497676Z" level=info msg="Start cni network conf syncer for default" Mar 20 21:32:20.035482 containerd[1513]: time="2025-03-20T21:32:20.034515038Z" level=info msg="Start streaming server" Mar 20 21:32:20.035482 containerd[1513]: time="2025-03-20T21:32:20.034531930Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 21:32:20.035482 containerd[1513]: time="2025-03-20T21:32:20.034544463Z" level=info msg="runtime interface starting up..." Mar 20 21:32:20.035482 containerd[1513]: time="2025-03-20T21:32:20.034552859Z" level=info msg="starting plugins..." Mar 20 21:32:20.035482 containerd[1513]: time="2025-03-20T21:32:20.034575682Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 21:32:20.034836 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 20 21:32:20.037378 containerd[1513]: time="2025-03-20T21:32:20.037322866Z" level=info msg="containerd successfully booted in 0.264137s" Mar 20 21:32:20.041610 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 21:32:20.045077 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 21:32:20.064660 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 21:32:20.065071 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 21:32:20.067443 tar[1507]: linux-amd64/README.md Mar 20 21:32:20.068478 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 20 21:32:20.087627 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 21:32:20.091490 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 21:32:20.094608 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 21:32:20.096778 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 20 21:32:20.098137 systemd[1]: Reached target getty.target - Login Prompts. Mar 20 21:32:20.522892 systemd-networkd[1428]: eth0: Gained IPv6LL Mar 20 21:32:20.526451 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 21:32:20.528348 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 21:32:20.531057 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:32:20.533477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:32:20.536047 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 21:32:20.563728 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 21:32:20.584302 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:32:20.584603 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Mar 20 21:32:20.586333 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 20 21:32:21.685495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:32:21.687185 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 20 21:32:21.689386 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:32:21.690717 systemd[1]: Startup finished in 1.044s (kernel) + 5.986s (initrd) + 5.207s (userspace) = 12.238s. Mar 20 21:32:22.419854 kubelet[1614]: E0320 21:32:22.419766 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:32:22.424963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:32:22.425179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:32:22.425553 systemd[1]: kubelet.service: Consumed 1.736s CPU time, 254.5M memory peak. Mar 20 21:32:23.297545 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 21:32:23.298880 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:55202.service - OpenSSH per-connection server daemon (10.0.0.1:55202). Mar 20 21:32:23.370593 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 55202 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:32:23.372573 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:32:23.383821 systemd-logind[1496]: New session 1 of user core. Mar 20 21:32:23.385226 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 20 21:32:23.386543 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 21:32:23.412582 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 21:32:23.415534 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 21:32:23.437245 (systemd)[1631]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 21:32:23.439714 systemd-logind[1496]: New session c1 of user core. Mar 20 21:32:23.584527 systemd[1631]: Queued start job for default target default.target. Mar 20 21:32:23.595109 systemd[1631]: Created slice app.slice - User Application Slice. Mar 20 21:32:23.595137 systemd[1631]: Reached target paths.target - Paths. Mar 20 21:32:23.595180 systemd[1631]: Reached target timers.target - Timers. Mar 20 21:32:23.596849 systemd[1631]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 21:32:23.608166 systemd[1631]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 21:32:23.608316 systemd[1631]: Reached target sockets.target - Sockets. Mar 20 21:32:23.608363 systemd[1631]: Reached target basic.target - Basic System. Mar 20 21:32:23.608410 systemd[1631]: Reached target default.target - Main User Target. Mar 20 21:32:23.608449 systemd[1631]: Startup finished in 161ms. Mar 20 21:32:23.608902 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 21:32:23.610712 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 21:32:23.673804 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:55204.service - OpenSSH per-connection server daemon (10.0.0.1:55204). 
Mar 20 21:32:23.735087 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 55204 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:32:23.736810 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:32:23.741146 systemd-logind[1496]: New session 2 of user core. Mar 20 21:32:23.750800 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 20 21:32:23.805205 sshd[1644]: Connection closed by 10.0.0.1 port 55204 Mar 20 21:32:23.805758 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Mar 20 21:32:23.816497 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:55204.service: Deactivated successfully. Mar 20 21:32:23.818455 systemd[1]: session-2.scope: Deactivated successfully. Mar 20 21:32:23.820214 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Mar 20 21:32:23.821520 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:55206.service - OpenSSH per-connection server daemon (10.0.0.1:55206). Mar 20 21:32:23.822319 systemd-logind[1496]: Removed session 2. Mar 20 21:32:23.874858 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 55206 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:32:23.876412 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:32:23.880857 systemd-logind[1496]: New session 3 of user core. Mar 20 21:32:23.894804 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 21:32:23.943886 sshd[1652]: Connection closed by 10.0.0.1 port 55206 Mar 20 21:32:23.944308 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Mar 20 21:32:23.952433 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:55206.service: Deactivated successfully. Mar 20 21:32:23.954368 systemd[1]: session-3.scope: Deactivated successfully. Mar 20 21:32:23.956100 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. 
Mar 20 21:32:23.957422 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:55210.service - OpenSSH per-connection server daemon (10.0.0.1:55210). Mar 20 21:32:23.958167 systemd-logind[1496]: Removed session 3. Mar 20 21:32:24.007138 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 55210 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:32:24.008906 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:32:24.013293 systemd-logind[1496]: New session 4 of user core. Mar 20 21:32:24.022768 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 21:32:24.076337 sshd[1660]: Connection closed by 10.0.0.1 port 55210 Mar 20 21:32:24.076668 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Mar 20 21:32:24.089370 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:55210.service: Deactivated successfully. Mar 20 21:32:24.091278 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 21:32:24.093091 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Mar 20 21:32:24.094318 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:55226.service - OpenSSH per-connection server daemon (10.0.0.1:55226). Mar 20 21:32:24.095004 systemd-logind[1496]: Removed session 4. Mar 20 21:32:24.150961 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 55226 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:32:24.152411 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:32:24.156643 systemd-logind[1496]: New session 5 of user core. Mar 20 21:32:24.166752 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 20 21:32:24.224975 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 21:32:24.225312 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:32:24.245048 sudo[1669]: pam_unix(sudo:session): session closed for user root Mar 20 21:32:24.246914 sshd[1668]: Connection closed by 10.0.0.1 port 55226 Mar 20 21:32:24.247384 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Mar 20 21:32:24.259501 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:55226.service: Deactivated successfully. Mar 20 21:32:24.261442 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 21:32:24.263228 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Mar 20 21:32:24.264619 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:55238.service - OpenSSH per-connection server daemon (10.0.0.1:55238). Mar 20 21:32:24.265386 systemd-logind[1496]: Removed session 5. Mar 20 21:32:24.332956 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 55238 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:32:24.334669 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:32:24.339146 systemd-logind[1496]: New session 6 of user core. Mar 20 21:32:24.353857 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 20 21:32:24.408753 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 21:32:24.409084 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:32:24.413126 sudo[1679]: pam_unix(sudo:session): session closed for user root Mar 20 21:32:24.420059 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 21:32:24.420397 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:32:24.430393 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:32:24.474573 augenrules[1701]: No rules Mar 20 21:32:24.476560 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:32:24.476880 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:32:24.478041 sudo[1678]: pam_unix(sudo:session): session closed for user root Mar 20 21:32:24.479608 sshd[1677]: Connection closed by 10.0.0.1 port 55238 Mar 20 21:32:24.479920 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Mar 20 21:32:24.488710 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:55238.service: Deactivated successfully. Mar 20 21:32:24.490961 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 21:32:24.492699 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Mar 20 21:32:24.493904 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:55254.service - OpenSSH per-connection server daemon (10.0.0.1:55254). Mar 20 21:32:24.494587 systemd-logind[1496]: Removed session 6. Mar 20 21:32:24.541615 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 55254 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:32:24.542978 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:32:24.547061 systemd-logind[1496]: New session 7 of user core. 
Mar 20 21:32:24.562763 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 20 21:32:24.620240 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 21:32:24.620597 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:32:25.320946 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 20 21:32:25.334970 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 20 21:32:25.926545 dockerd[1734]: time="2025-03-20T21:32:25.926466680Z" level=info msg="Starting up" Mar 20 21:32:25.929211 dockerd[1734]: time="2025-03-20T21:32:25.929181173Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 20 21:32:26.238389 dockerd[1734]: time="2025-03-20T21:32:26.238256149Z" level=info msg="Loading containers: start." Mar 20 21:32:26.427677 kernel: Initializing XFRM netlink socket Mar 20 21:32:26.502235 systemd-networkd[1428]: docker0: Link UP Mar 20 21:32:26.599014 dockerd[1734]: time="2025-03-20T21:32:26.598965526Z" level=info msg="Loading containers: done." Mar 20 21:32:26.614257 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3697909504-merged.mount: Deactivated successfully. 
Mar 20 21:32:26.615018 dockerd[1734]: time="2025-03-20T21:32:26.614962608Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 20 21:32:26.615082 dockerd[1734]: time="2025-03-20T21:32:26.615069919Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 20 21:32:26.615219 dockerd[1734]: time="2025-03-20T21:32:26.615194893Z" level=info msg="Daemon has completed initialization" Mar 20 21:32:26.651453 dockerd[1734]: time="2025-03-20T21:32:26.651378857Z" level=info msg="API listen on /run/docker.sock" Mar 20 21:32:26.651470 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 20 21:32:27.154050 containerd[1513]: time="2025-03-20T21:32:27.153998728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 20 21:32:27.750024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533574535.mount: Deactivated successfully. 
Mar 20 21:32:28.616924 containerd[1513]: time="2025-03-20T21:32:28.616862505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:28.617673 containerd[1513]: time="2025-03-20T21:32:28.617570823Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682430" Mar 20 21:32:28.618729 containerd[1513]: time="2025-03-20T21:32:28.618696606Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:28.621316 containerd[1513]: time="2025-03-20T21:32:28.621286254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:28.622147 containerd[1513]: time="2025-03-20T21:32:28.622100051Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 1.46805765s" Mar 20 21:32:28.622147 containerd[1513]: time="2025-03-20T21:32:28.622139154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 20 21:32:28.622864 containerd[1513]: time="2025-03-20T21:32:28.622827896Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 20 21:32:29.700048 containerd[1513]: time="2025-03-20T21:32:29.699987925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:29.700678 containerd[1513]: time="2025-03-20T21:32:29.700616314Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779684" Mar 20 21:32:29.701745 containerd[1513]: time="2025-03-20T21:32:29.701718432Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:29.704361 containerd[1513]: time="2025-03-20T21:32:29.704300686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:29.705033 containerd[1513]: time="2025-03-20T21:32:29.704992925Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 1.082131836s" Mar 20 21:32:29.705033 containerd[1513]: time="2025-03-20T21:32:29.705029033Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 20 21:32:29.705669 containerd[1513]: time="2025-03-20T21:32:29.705631142Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 20 21:32:31.021986 containerd[1513]: time="2025-03-20T21:32:31.021912290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:31.022911 containerd[1513]: time="2025-03-20T21:32:31.022808912Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171419" Mar 20 21:32:31.024225 containerd[1513]: time="2025-03-20T21:32:31.024192047Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:31.027135 containerd[1513]: time="2025-03-20T21:32:31.027084132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:31.028203 containerd[1513]: time="2025-03-20T21:32:31.028170370Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 1.322498812s" Mar 20 21:32:31.028203 containerd[1513]: time="2025-03-20T21:32:31.028201509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 20 21:32:31.028781 containerd[1513]: time="2025-03-20T21:32:31.028740610Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 20 21:32:31.984741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532643358.mount: Deactivated successfully. 
Mar 20 21:32:32.494179 containerd[1513]: time="2025-03-20T21:32:32.494111941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:32.494887 containerd[1513]: time="2025-03-20T21:32:32.494832092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185" Mar 20 21:32:32.496024 containerd[1513]: time="2025-03-20T21:32:32.495986949Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:32.497731 containerd[1513]: time="2025-03-20T21:32:32.497697478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:32.498210 containerd[1513]: time="2025-03-20T21:32:32.498171377Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 1.469391303s" Mar 20 21:32:32.498210 containerd[1513]: time="2025-03-20T21:32:32.498207545Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 20 21:32:32.498800 containerd[1513]: time="2025-03-20T21:32:32.498770551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 20 21:32:32.675585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 20 21:32:32.677247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 20 21:32:32.928049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:32:32.932548 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:32:33.023578 kubelet[2019]: E0320 21:32:33.023464 2019 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:32:33.029707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:32:33.029919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:32:33.030277 systemd[1]: kubelet.service: Consumed 271ms CPU time, 104.3M memory peak. Mar 20 21:32:33.331173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493893255.mount: Deactivated successfully. 
Mar 20 21:32:33.990970 containerd[1513]: time="2025-03-20T21:32:33.990911133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:33.991831 containerd[1513]: time="2025-03-20T21:32:33.991779503Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Mar 20 21:32:33.993108 containerd[1513]: time="2025-03-20T21:32:33.993055396Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:33.995488 containerd[1513]: time="2025-03-20T21:32:33.995462091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:33.996331 containerd[1513]: time="2025-03-20T21:32:33.996296116Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.497487263s" Mar 20 21:32:33.996396 containerd[1513]: time="2025-03-20T21:32:33.996333436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 20 21:32:33.996899 containerd[1513]: time="2025-03-20T21:32:33.996854043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 20 21:32:34.475036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946070127.mount: Deactivated successfully. 
Mar 20 21:32:34.480296 containerd[1513]: time="2025-03-20T21:32:34.480245961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:32:34.481061 containerd[1513]: time="2025-03-20T21:32:34.480977503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 20 21:32:34.482048 containerd[1513]: time="2025-03-20T21:32:34.482011833Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:32:34.484002 containerd[1513]: time="2025-03-20T21:32:34.483948647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:32:34.484487 containerd[1513]: time="2025-03-20T21:32:34.484452402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.568743ms" Mar 20 21:32:34.484487 containerd[1513]: time="2025-03-20T21:32:34.484481126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 20 21:32:34.485015 containerd[1513]: time="2025-03-20T21:32:34.484985492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 20 21:32:35.026118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917256417.mount: Deactivated 
successfully. Mar 20 21:32:36.579342 containerd[1513]: time="2025-03-20T21:32:36.579261924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:36.580041 containerd[1513]: time="2025-03-20T21:32:36.579983197Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Mar 20 21:32:36.581672 containerd[1513]: time="2025-03-20T21:32:36.581620920Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:36.584525 containerd[1513]: time="2025-03-20T21:32:36.584487328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:32:36.585793 containerd[1513]: time="2025-03-20T21:32:36.585746680Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.100729178s" Mar 20 21:32:36.585842 containerd[1513]: time="2025-03-20T21:32:36.585795602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 20 21:32:38.809293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:32:38.809453 systemd[1]: kubelet.service: Consumed 271ms CPU time, 104.3M memory peak. Mar 20 21:32:38.811615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 20 21:32:38.835158 systemd[1]: Reload requested from client PID 2169 ('systemctl') (unit session-7.scope)... Mar 20 21:32:38.835175 systemd[1]: Reloading... Mar 20 21:32:38.928706 zram_generator::config[2217]: No configuration found. Mar 20 21:32:39.038025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:32:39.140041 systemd[1]: Reloading finished in 304 ms. Mar 20 21:32:39.201461 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:32:39.203613 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:32:39.203935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:32:39.203984 systemd[1]: kubelet.service: Consumed 168ms CPU time, 91.8M memory peak. Mar 20 21:32:39.205714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:32:39.398721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:32:39.411020 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:32:39.446225 kubelet[2264]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:32:39.446225 kubelet[2264]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 20 21:32:39.446225 kubelet[2264]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 20 21:32:39.446746 kubelet[2264]: I0320 21:32:39.446309 2264 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:32:39.691243 kubelet[2264]: I0320 21:32:39.691136 2264 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 20 21:32:39.691243 kubelet[2264]: I0320 21:32:39.691163 2264 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:32:39.691443 kubelet[2264]: I0320 21:32:39.691420 2264 server.go:954] "Client rotation is on, will bootstrap in background" Mar 20 21:32:39.712397 kubelet[2264]: E0320 21:32:39.712366 2264 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:32:39.713800 kubelet[2264]: I0320 21:32:39.713775 2264 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:32:39.723539 kubelet[2264]: I0320 21:32:39.723513 2264 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 21:32:39.728514 kubelet[2264]: I0320 21:32:39.728494 2264 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 21:32:39.729193 kubelet[2264]: I0320 21:32:39.729159 2264 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:32:39.729343 kubelet[2264]: I0320 21:32:39.729188 2264 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 21:32:39.729447 kubelet[2264]: I0320 21:32:39.729356 2264 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 20 21:32:39.729447 kubelet[2264]: I0320 21:32:39.729365 2264 container_manager_linux.go:304] "Creating device plugin manager" Mar 20 21:32:39.729523 kubelet[2264]: I0320 21:32:39.729507 2264 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:32:39.731914 kubelet[2264]: I0320 21:32:39.731892 2264 kubelet.go:446] "Attempting to sync node with API server" Mar 20 21:32:39.731914 kubelet[2264]: I0320 21:32:39.731912 2264 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:32:39.731982 kubelet[2264]: I0320 21:32:39.731939 2264 kubelet.go:352] "Adding apiserver pod source" Mar 20 21:32:39.731982 kubelet[2264]: I0320 21:32:39.731955 2264 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:32:39.735056 kubelet[2264]: I0320 21:32:39.735018 2264 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:32:39.735394 kubelet[2264]: W0320 21:32:39.735343 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Mar 20 21:32:39.735394 kubelet[2264]: I0320 21:32:39.735371 2264 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:32:39.735465 kubelet[2264]: E0320 21:32:39.735398 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:32:39.736080 kubelet[2264]: W0320 21:32:39.736055 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Mar 20 21:32:39.736135 kubelet[2264]: E0320 21:32:39.736086 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:32:39.736135 kubelet[2264]: W0320 21:32:39.736104 2264 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 20 21:32:39.738182 kubelet[2264]: I0320 21:32:39.738160 2264 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 20 21:32:39.738236 kubelet[2264]: I0320 21:32:39.738207 2264 server.go:1287] "Started kubelet" Mar 20 21:32:39.738971 kubelet[2264]: I0320 21:32:39.738936 2264 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:32:39.740215 kubelet[2264]: I0320 21:32:39.739587 2264 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:32:39.740215 kubelet[2264]: I0320 21:32:39.739804 2264 server.go:490] "Adding debug handlers to kubelet server" Mar 20 21:32:39.740215 kubelet[2264]: I0320 21:32:39.739884 2264 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:32:39.742459 kubelet[2264]: I0320 21:32:39.742442 2264 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:32:39.743022 kubelet[2264]: E0320 21:32:39.742845 2264 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:32:39.743022 kubelet[2264]: I0320 21:32:39.743019 2264 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 21:32:39.743849 kubelet[2264]: I0320 21:32:39.743830 2264 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 20 21:32:39.743954 kubelet[2264]: E0320 21:32:39.743936 2264 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:32:39.744286 kubelet[2264]: I0320 21:32:39.744269 2264 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 21:32:39.744349 kubelet[2264]: I0320 21:32:39.744334 2264 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:32:39.744447 kubelet[2264]: E0320 21:32:39.743274 2264 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ea04d4f0da3f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:32:39.738180601 +0000 UTC m=+0.322987338,LastTimestamp:2025-03-20 21:32:39.738180601 +0000 UTC m=+0.322987338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:32:39.745655 kubelet[2264]: E0320 21:32:39.744613 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Mar 20 21:32:39.745655 kubelet[2264]: W0320 21:32:39.744727 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Mar 20 21:32:39.745655 kubelet[2264]: E0320 21:32:39.744767 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:32:39.745914 kubelet[2264]: I0320 21:32:39.745814 2264 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:32:39.745967 kubelet[2264]: I0320 21:32:39.745925 2264 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:32:39.747962 kubelet[2264]: I0320 21:32:39.747936 2264 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:32:39.760010 kubelet[2264]: I0320 21:32:39.759982 2264 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 20 21:32:39.760010 kubelet[2264]: I0320 21:32:39.759997 2264 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 20 21:32:39.760154 kubelet[2264]: I0320 21:32:39.760036 2264 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:32:39.763047 kubelet[2264]: I0320 21:32:39.763009 2264 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 20 21:32:39.764204 kubelet[2264]: I0320 21:32:39.764177 2264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 21:32:39.764252 kubelet[2264]: I0320 21:32:39.764214 2264 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 20 21:32:39.764252 kubelet[2264]: I0320 21:32:39.764240 2264 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 20 21:32:39.764252 kubelet[2264]: I0320 21:32:39.764251 2264 kubelet.go:2388] "Starting kubelet main sync loop" Mar 20 21:32:39.764344 kubelet[2264]: E0320 21:32:39.764312 2264 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:32:39.844398 kubelet[2264]: E0320 21:32:39.844358 2264 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:32:39.864555 kubelet[2264]: E0320 21:32:39.864527 2264 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 21:32:39.945571 kubelet[2264]: E0320 21:32:39.945474 2264 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:32:39.945752 kubelet[2264]: E0320 21:32:39.945718 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Mar 20 21:32:40.033349 kubelet[2264]: I0320 21:32:40.033327 2264 policy_none.go:49] "None policy: Start" Mar 20 21:32:40.033349 kubelet[2264]: I0320 21:32:40.033351 2264 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 20 21:32:40.033426 kubelet[2264]: I0320 21:32:40.033366 2264 state_mem.go:35] "Initializing 
new in-memory state store" Mar 20 21:32:40.033426 kubelet[2264]: W0320 21:32:40.033365 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Mar 20 21:32:40.033477 kubelet[2264]: E0320 21:32:40.033421 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:32:40.039665 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 20 21:32:40.045580 kubelet[2264]: E0320 21:32:40.045556 2264 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:32:40.050372 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 21:32:40.053029 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 20 21:32:40.065306 kubelet[2264]: E0320 21:32:40.065271 2264 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 21:32:40.065432 kubelet[2264]: I0320 21:32:40.065415 2264 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:32:40.065685 kubelet[2264]: I0320 21:32:40.065618 2264 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:32:40.065685 kubelet[2264]: I0320 21:32:40.065652 2264 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:32:40.065919 kubelet[2264]: I0320 21:32:40.065897 2264 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:32:40.066869 kubelet[2264]: E0320 21:32:40.066847 2264 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 20 21:32:40.066912 kubelet[2264]: E0320 21:32:40.066887 2264 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 21:32:40.167980 kubelet[2264]: I0320 21:32:40.167914 2264 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 20 21:32:40.168316 kubelet[2264]: E0320 21:32:40.168292 2264 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Mar 20 21:32:40.347358 kubelet[2264]: E0320 21:32:40.347192 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Mar 20 21:32:40.370397 kubelet[2264]: I0320 21:32:40.370366 2264 
kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 20 21:32:40.370789 kubelet[2264]: E0320 21:32:40.370749 2264 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Mar 20 21:32:40.474419 systemd[1]: Created slice kubepods-burstable-pod4f7b5afe62082ab238798c7bdae1002c.slice - libcontainer container kubepods-burstable-pod4f7b5afe62082ab238798c7bdae1002c.slice. Mar 20 21:32:40.484486 kubelet[2264]: E0320 21:32:40.484452 2264 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:32:40.487031 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 20 21:32:40.502946 kubelet[2264]: E0320 21:32:40.502908 2264 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 20 21:32:40.505826 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. 
Mar 20 21:32:40.507501 kubelet[2264]: E0320 21:32:40.507470 2264 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 21:32:40.546877 kubelet[2264]: I0320 21:32:40.546833 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7b5afe62082ab238798c7bdae1002c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f7b5afe62082ab238798c7bdae1002c\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:40.546920 kubelet[2264]: I0320 21:32:40.546879 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7b5afe62082ab238798c7bdae1002c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f7b5afe62082ab238798c7bdae1002c\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:40.546920 kubelet[2264]: I0320 21:32:40.546903 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:40.546977 kubelet[2264]: I0320 21:32:40.546938 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7b5afe62082ab238798c7bdae1002c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f7b5afe62082ab238798c7bdae1002c\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:40.547000 kubelet[2264]: I0320 21:32:40.546981 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:40.547029 kubelet[2264]: I0320 21:32:40.547004 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:40.547029 kubelet[2264]: I0320 21:32:40.547018 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:40.547077 kubelet[2264]: I0320 21:32:40.547038 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:40.547077 kubelet[2264]: I0320 21:32:40.547052 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost"
Mar 20 21:32:40.772599 kubelet[2264]: I0320 21:32:40.772469 2264 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 20 21:32:40.772866 kubelet[2264]: E0320 21:32:40.772819 2264 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost"
Mar 20 21:32:40.785219 kubelet[2264]: E0320 21:32:40.785188 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:40.785990 containerd[1513]: time="2025-03-20T21:32:40.785933170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f7b5afe62082ab238798c7bdae1002c,Namespace:kube-system,Attempt:0,}"
Mar 20 21:32:40.804226 kubelet[2264]: E0320 21:32:40.804180 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:40.804770 containerd[1513]: time="2025-03-20T21:32:40.804721769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}"
Mar 20 21:32:40.808221 kubelet[2264]: E0320 21:32:40.808163 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:40.808619 containerd[1513]: time="2025-03-20T21:32:40.808581179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}"
Mar 20 21:32:40.809716 containerd[1513]: time="2025-03-20T21:32:40.809679750Z" level=info msg="connecting to shim 7bc760953f70fef69f825718fcf137fd5cc394b6f0def3656eacdf1a5167f919" address="unix:///run/containerd/s/c99180fedb0fdd1cfef6d516a991684af856df0bab35f26eaffc2db9bacf75bd" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:32:40.833833 systemd[1]: Started cri-containerd-7bc760953f70fef69f825718fcf137fd5cc394b6f0def3656eacdf1a5167f919.scope - libcontainer container 7bc760953f70fef69f825718fcf137fd5cc394b6f0def3656eacdf1a5167f919.
Mar 20 21:32:40.842528 containerd[1513]: time="2025-03-20T21:32:40.842445271Z" level=info msg="connecting to shim 8b3d3af31c2d213b4dd67b4d06a9a5abaaff41e4a243d93f3ed9953319450e41" address="unix:///run/containerd/s/47ee8871874f2a3b33779da790fbc88acabaebc31dcfa21a0c3a4d0a89563b20" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:32:40.846494 containerd[1513]: time="2025-03-20T21:32:40.846449863Z" level=info msg="connecting to shim c2c3c7eee02617da5a45afb34ae0647070681684be4a7ff57101eb2491d9652b" address="unix:///run/containerd/s/46a622ea041fd3cad8035acf1ead0874dc71388c58fbc289d11bd69229745a7a" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:32:40.869796 systemd[1]: Started cri-containerd-8b3d3af31c2d213b4dd67b4d06a9a5abaaff41e4a243d93f3ed9953319450e41.scope - libcontainer container 8b3d3af31c2d213b4dd67b4d06a9a5abaaff41e4a243d93f3ed9953319450e41.
Mar 20 21:32:40.874479 systemd[1]: Started cri-containerd-c2c3c7eee02617da5a45afb34ae0647070681684be4a7ff57101eb2491d9652b.scope - libcontainer container c2c3c7eee02617da5a45afb34ae0647070681684be4a7ff57101eb2491d9652b.
Mar 20 21:32:40.885374 containerd[1513]: time="2025-03-20T21:32:40.885330597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f7b5afe62082ab238798c7bdae1002c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bc760953f70fef69f825718fcf137fd5cc394b6f0def3656eacdf1a5167f919\""
Mar 20 21:32:40.886755 kubelet[2264]: E0320 21:32:40.886528 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:40.888259 containerd[1513]: time="2025-03-20T21:32:40.888232170Z" level=info msg="CreateContainer within sandbox \"7bc760953f70fef69f825718fcf137fd5cc394b6f0def3656eacdf1a5167f919\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 20 21:32:40.896675 containerd[1513]: time="2025-03-20T21:32:40.896477860Z" level=info msg="Container c7b99402229b3d7938db81bec4cd00d76ad464f30b522551505402a570d34063: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:32:40.907288 containerd[1513]: time="2025-03-20T21:32:40.907244899Z" level=info msg="CreateContainer within sandbox \"7bc760953f70fef69f825718fcf137fd5cc394b6f0def3656eacdf1a5167f919\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c7b99402229b3d7938db81bec4cd00d76ad464f30b522551505402a570d34063\""
Mar 20 21:32:40.907830 containerd[1513]: time="2025-03-20T21:32:40.907804469Z" level=info msg="StartContainer for \"c7b99402229b3d7938db81bec4cd00d76ad464f30b522551505402a570d34063\""
Mar 20 21:32:40.909548 containerd[1513]: time="2025-03-20T21:32:40.909508175Z" level=info msg="connecting to shim c7b99402229b3d7938db81bec4cd00d76ad464f30b522551505402a570d34063" address="unix:///run/containerd/s/c99180fedb0fdd1cfef6d516a991684af856df0bab35f26eaffc2db9bacf75bd" protocol=ttrpc version=3
Mar 20 21:32:40.913152 kubelet[2264]: W0320 21:32:40.913120 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Mar 20 21:32:40.913229 kubelet[2264]: E0320 21:32:40.913162 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError"
Mar 20 21:32:40.918030 containerd[1513]: time="2025-03-20T21:32:40.917903115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b3d3af31c2d213b4dd67b4d06a9a5abaaff41e4a243d93f3ed9953319450e41\""
Mar 20 21:32:40.918782 kubelet[2264]: E0320 21:32:40.918763 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:40.922222 containerd[1513]: time="2025-03-20T21:32:40.922069882Z" level=info msg="CreateContainer within sandbox \"8b3d3af31c2d213b4dd67b4d06a9a5abaaff41e4a243d93f3ed9953319450e41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 20 21:32:40.923492 containerd[1513]: time="2025-03-20T21:32:40.923457074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2c3c7eee02617da5a45afb34ae0647070681684be4a7ff57101eb2491d9652b\""
Mar 20 21:32:40.924230 kubelet[2264]: E0320 21:32:40.924151 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:40.925526 containerd[1513]: time="2025-03-20T21:32:40.925467747Z" level=info msg="CreateContainer within sandbox \"c2c3c7eee02617da5a45afb34ae0647070681684be4a7ff57101eb2491d9652b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 20 21:32:40.931838 systemd[1]: Started cri-containerd-c7b99402229b3d7938db81bec4cd00d76ad464f30b522551505402a570d34063.scope - libcontainer container c7b99402229b3d7938db81bec4cd00d76ad464f30b522551505402a570d34063.
Mar 20 21:32:40.932339 containerd[1513]: time="2025-03-20T21:32:40.932152778Z" level=info msg="Container d20586d097d4a00272b3d1cb7e6ae6a1f5658a56618f49a970f67e623c0b5fd7: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:32:40.938259 containerd[1513]: time="2025-03-20T21:32:40.938201907Z" level=info msg="Container e2d494675e6320ffe25178d78b1e0db7088201f7c0cb028f90e640e3cb9a69bc: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:32:40.943562 containerd[1513]: time="2025-03-20T21:32:40.943539260Z" level=info msg="CreateContainer within sandbox \"8b3d3af31c2d213b4dd67b4d06a9a5abaaff41e4a243d93f3ed9953319450e41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d20586d097d4a00272b3d1cb7e6ae6a1f5658a56618f49a970f67e623c0b5fd7\""
Mar 20 21:32:40.944098 containerd[1513]: time="2025-03-20T21:32:40.944076578Z" level=info msg="StartContainer for \"d20586d097d4a00272b3d1cb7e6ae6a1f5658a56618f49a970f67e623c0b5fd7\""
Mar 20 21:32:40.945055 containerd[1513]: time="2025-03-20T21:32:40.945001884Z" level=info msg="connecting to shim d20586d097d4a00272b3d1cb7e6ae6a1f5658a56618f49a970f67e623c0b5fd7" address="unix:///run/containerd/s/47ee8871874f2a3b33779da790fbc88acabaebc31dcfa21a0c3a4d0a89563b20" protocol=ttrpc version=3
Mar 20 21:32:40.946551 containerd[1513]: time="2025-03-20T21:32:40.946504383Z" level=info msg="CreateContainer within sandbox \"c2c3c7eee02617da5a45afb34ae0647070681684be4a7ff57101eb2491d9652b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2d494675e6320ffe25178d78b1e0db7088201f7c0cb028f90e640e3cb9a69bc\""
Mar 20 21:32:40.947139 containerd[1513]: time="2025-03-20T21:32:40.947108657Z" level=info msg="StartContainer for \"e2d494675e6320ffe25178d78b1e0db7088201f7c0cb028f90e640e3cb9a69bc\""
Mar 20 21:32:40.948309 containerd[1513]: time="2025-03-20T21:32:40.948275075Z" level=info msg="connecting to shim e2d494675e6320ffe25178d78b1e0db7088201f7c0cb028f90e640e3cb9a69bc" address="unix:///run/containerd/s/46a622ea041fd3cad8035acf1ead0874dc71388c58fbc289d11bd69229745a7a" protocol=ttrpc version=3
Mar 20 21:32:40.964981 systemd[1]: Started cri-containerd-d20586d097d4a00272b3d1cb7e6ae6a1f5658a56618f49a970f67e623c0b5fd7.scope - libcontainer container d20586d097d4a00272b3d1cb7e6ae6a1f5658a56618f49a970f67e623c0b5fd7.
Mar 20 21:32:40.968753 systemd[1]: Started cri-containerd-e2d494675e6320ffe25178d78b1e0db7088201f7c0cb028f90e640e3cb9a69bc.scope - libcontainer container e2d494675e6320ffe25178d78b1e0db7088201f7c0cb028f90e640e3cb9a69bc.
Mar 20 21:32:40.980831 containerd[1513]: time="2025-03-20T21:32:40.980766521Z" level=info msg="StartContainer for \"c7b99402229b3d7938db81bec4cd00d76ad464f30b522551505402a570d34063\" returns successfully"
Mar 20 21:32:41.022789 containerd[1513]: time="2025-03-20T21:32:41.022512610Z" level=info msg="StartContainer for \"e2d494675e6320ffe25178d78b1e0db7088201f7c0cb028f90e640e3cb9a69bc\" returns successfully"
Mar 20 21:32:41.030512 containerd[1513]: time="2025-03-20T21:32:41.030482462Z" level=info msg="StartContainer for \"d20586d097d4a00272b3d1cb7e6ae6a1f5658a56618f49a970f67e623c0b5fd7\" returns successfully"
Mar 20 21:32:41.574189 kubelet[2264]: I0320 21:32:41.574144 2264 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 20 21:32:41.772888 kubelet[2264]: E0320 21:32:41.772847 2264 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 21:32:41.772999 kubelet[2264]: E0320 21:32:41.772986 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:41.775595 kubelet[2264]: E0320 21:32:41.775400 2264 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 21:32:41.775595 kubelet[2264]: E0320 21:32:41.775531 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:41.777107 kubelet[2264]: E0320 21:32:41.777078 2264 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 21:32:41.777243 kubelet[2264]: E0320 21:32:41.777170 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:42.088670 kubelet[2264]: I0320 21:32:42.086375 2264 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Mar 20 21:32:42.088670 kubelet[2264]: E0320 21:32:42.086421 2264 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 20 21:32:42.088670 kubelet[2264]: E0320 21:32:42.086656 2264 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182ea04d4f0da3f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:32:39.738180601 +0000 UTC m=+0.322987338,LastTimestamp:2025-03-20 21:32:39.738180601 +0000 UTC m=+0.322987338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 20 21:32:42.144498 kubelet[2264]: I0320 21:32:42.144415 2264 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 20 21:32:42.152653 kubelet[2264]: E0320 21:32:42.152588 2264 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 20 21:32:42.152653 kubelet[2264]: I0320 21:32:42.152629 2264 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:42.154317 kubelet[2264]: E0320 21:32:42.153954 2264 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:42.154317 kubelet[2264]: I0320 21:32:42.153984 2264 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:42.155408 kubelet[2264]: E0320 21:32:42.155386 2264 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:42.735912 kubelet[2264]: I0320 21:32:42.735879 2264 apiserver.go:52] "Watching apiserver"
Mar 20 21:32:42.745108 kubelet[2264]: I0320 21:32:42.745064 2264 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 20 21:32:42.778522 kubelet[2264]: I0320 21:32:42.778482 2264 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 20 21:32:42.778782 kubelet[2264]: I0320 21:32:42.778626 2264 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:42.779829 kubelet[2264]: E0320 21:32:42.779797 2264 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 20 21:32:42.779954 kubelet[2264]: E0320 21:32:42.779854 2264 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:42.779954 kubelet[2264]: E0320 21:32:42.779930 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:42.780011 kubelet[2264]: E0320 21:32:42.779961 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:43.437035 kubelet[2264]: I0320 21:32:43.436999 2264 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:43.441269 kubelet[2264]: E0320 21:32:43.441227 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:43.780465 kubelet[2264]: I0320 21:32:43.780157 2264 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:43.780465 kubelet[2264]: E0320 21:32:43.780161 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:43.780465 kubelet[2264]: I0320 21:32:43.780293 2264 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 20 21:32:43.783310 kubelet[2264]: E0320 21:32:43.783279 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:43.787610 kubelet[2264]: E0320 21:32:43.785323 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:32:43.844290 systemd[1]: Reload requested from client PID 2539 ('systemctl') (unit session-7.scope)...
Mar 20 21:32:43.844309 systemd[1]: Reloading...
Mar 20 21:32:43.925679 zram_generator::config[2586]: No configuration found.
Mar 20 21:32:44.032519 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 21:32:44.148177 systemd[1]: Reloading finished in 303 ms.
Mar 20 21:32:44.174767 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:32:44.199980 systemd[1]: kubelet.service: Deactivated successfully.
Mar 20 21:32:44.200255 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:32:44.200295 systemd[1]: kubelet.service: Consumed 811ms CPU time, 131.2M memory peak.
Mar 20 21:32:44.203222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 21:32:44.406974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 21:32:44.418085 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 20 21:32:44.456777 kubelet[2628]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 21:32:44.456777 kubelet[2628]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 20 21:32:44.456777 kubelet[2628]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 21:32:44.457197 kubelet[2628]: I0320 21:32:44.456833 2628 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 21:32:44.464594 kubelet[2628]: I0320 21:32:44.464547 2628 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 20 21:32:44.464594 kubelet[2628]: I0320 21:32:44.464586 2628 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 21:32:44.464891 kubelet[2628]: I0320 21:32:44.464868 2628 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 20 21:32:44.466119 kubelet[2628]: I0320 21:32:44.466094 2628 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 20 21:32:44.468195 kubelet[2628]: I0320 21:32:44.468167 2628 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 20 21:32:44.472226 kubelet[2628]: I0320 21:32:44.472169 2628 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 21:32:44.477109 kubelet[2628]: I0320 21:32:44.477087 2628 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 20 21:32:44.477340 kubelet[2628]: I0320 21:32:44.477304 2628 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 21:32:44.477493 kubelet[2628]: I0320 21:32:44.477333 2628 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 21:32:44.477574 kubelet[2628]: I0320 21:32:44.477494 2628 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 21:32:44.477574 kubelet[2628]: I0320 21:32:44.477504 2628 container_manager_linux.go:304] "Creating device plugin manager"
Mar 20 21:32:44.477574 kubelet[2628]: I0320 21:32:44.477541 2628 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 21:32:44.477720 kubelet[2628]: I0320 21:32:44.477706 2628 kubelet.go:446] "Attempting to sync node with API server"
Mar 20 21:32:44.477761 kubelet[2628]: I0320 21:32:44.477720 2628 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 20 21:32:44.477761 kubelet[2628]: I0320 21:32:44.477744 2628 kubelet.go:352] "Adding apiserver pod source"
Mar 20 21:32:44.477761 kubelet[2628]: I0320 21:32:44.477755 2628 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 20 21:32:44.478831 kubelet[2628]: I0320 21:32:44.478794 2628 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 20 21:32:44.480096 kubelet[2628]: I0320 21:32:44.479867 2628 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 20 21:32:44.480435 kubelet[2628]: I0320 21:32:44.480413 2628 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 20 21:32:44.480503 kubelet[2628]: I0320 21:32:44.480454 2628 server.go:1287] "Started kubelet"
Mar 20 21:32:44.482761 kubelet[2628]: I0320 21:32:44.482731 2628 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 20 21:32:44.484406 kubelet[2628]: I0320 21:32:44.484344 2628 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 20 21:32:44.485922 kubelet[2628]: I0320 21:32:44.485883 2628 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 20 21:32:44.486951 kubelet[2628]: I0320 21:32:44.486297 2628 server.go:490] "Adding debug handlers to kubelet server"
Mar 20 21:32:44.486951 kubelet[2628]: I0320 21:32:44.486876 2628 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 20 21:32:44.487837 kubelet[2628]: I0320 21:32:44.487804 2628 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 20 21:32:44.488001 kubelet[2628]: E0320 21:32:44.487974 2628 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 21:32:44.488301 kubelet[2628]: I0320 21:32:44.488278 2628 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 20 21:32:44.488416 kubelet[2628]: I0320 21:32:44.488397 2628 reconciler.go:26] "Reconciler: start to sync state"
Mar 20 21:32:44.488934 kubelet[2628]: I0320 21:32:44.488912 2628 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 20 21:32:44.494222 kubelet[2628]: I0320 21:32:44.494073 2628 factory.go:221] Registration of the containerd container factory successfully
Mar 20 21:32:44.494222 kubelet[2628]: I0320 21:32:44.494097 2628 factory.go:221] Registration of the systemd container factory successfully
Mar 20 21:32:44.494361 kubelet[2628]: I0320 21:32:44.494233 2628 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 20 21:32:44.498338 kubelet[2628]: I0320 21:32:44.498224 2628 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 20 21:32:44.499672 kubelet[2628]: I0320 21:32:44.499603 2628 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 20 21:32:44.499672 kubelet[2628]: I0320 21:32:44.499657 2628 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 20 21:32:44.499749 kubelet[2628]: I0320 21:32:44.499679 2628 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 20 21:32:44.499749 kubelet[2628]: I0320 21:32:44.499688 2628 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 20 21:32:44.499749 kubelet[2628]: E0320 21:32:44.499740 2628 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 20 21:32:44.533835 kubelet[2628]: I0320 21:32:44.533800 2628 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 20 21:32:44.533835 kubelet[2628]: I0320 21:32:44.533820 2628 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 20 21:32:44.533835 kubelet[2628]: I0320 21:32:44.533838 2628 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 21:32:44.534020 kubelet[2628]: I0320 21:32:44.533984 2628 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 20 21:32:44.534020 kubelet[2628]: I0320 21:32:44.533995 2628 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 20 21:32:44.534020 kubelet[2628]: I0320 21:32:44.534012 2628 policy_none.go:49] "None policy: Start"
Mar 20 21:32:44.534020 kubelet[2628]: I0320 21:32:44.534020 2628 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 20 21:32:44.534124 kubelet[2628]: I0320 21:32:44.534031 2628 state_mem.go:35] "Initializing new in-memory state store"
Mar 20 21:32:44.534147 kubelet[2628]: I0320 21:32:44.534126 2628 state_mem.go:75] "Updated machine memory state"
Mar 20 21:32:44.538168 kubelet[2628]: I0320 21:32:44.538143 2628 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 20 21:32:44.538503 kubelet[2628]: I0320 21:32:44.538307 2628 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 20 21:32:44.538503 kubelet[2628]: I0320 21:32:44.538325 2628 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 20 21:32:44.538503 kubelet[2628]: I0320 21:32:44.538499 2628 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 20 21:32:44.539473 kubelet[2628]: E0320 21:32:44.539439 2628 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 20 21:32:44.600478 kubelet[2628]: I0320 21:32:44.600453 2628 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:44.600673 kubelet[2628]: I0320 21:32:44.600513 2628 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 20 21:32:44.600764 kubelet[2628]: I0320 21:32:44.600514 2628 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:44.606068 kubelet[2628]: E0320 21:32:44.606040 2628 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 20 21:32:44.606502 kubelet[2628]: E0320 21:32:44.606455 2628 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:44.606722 kubelet[2628]: E0320 21:32:44.606695 2628 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:44.643021 kubelet[2628]: I0320 21:32:44.642985 2628 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 20 21:32:44.648159 kubelet[2628]: I0320 21:32:44.648139 2628 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Mar 20 21:32:44.648235 kubelet[2628]: I0320 21:32:44.648201 2628 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Mar 20 21:32:44.789069 kubelet[2628]: I0320 21:32:44.788917 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:44.789069 kubelet[2628]: I0320 21:32:44.788960 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 21:32:44.789069 kubelet[2628]: I0320 21:32:44.788986 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7b5afe62082ab238798c7bdae1002c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f7b5afe62082ab238798c7bdae1002c\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:44.789069 kubelet[2628]: I0320 21:32:44.789004 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7b5afe62082ab238798c7bdae1002c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f7b5afe62082ab238798c7bdae1002c\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 21:32:44.789069 kubelet[2628]: I0320 21:32:44.789020 2628
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:32:44.789318 kubelet[2628]: I0320 21:32:44.789035 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:32:44.789318 kubelet[2628]: I0320 21:32:44.789053 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:32:44.789318 kubelet[2628]: I0320 21:32:44.789070 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:32:44.789318 kubelet[2628]: I0320 21:32:44.789084 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7b5afe62082ab238798c7bdae1002c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f7b5afe62082ab238798c7bdae1002c\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:32:44.842806 sudo[2664]: root : PWD=/home/core ; USER=root ; 
COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 20 21:32:44.843161 sudo[2664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 20 21:32:44.907293 kubelet[2628]: E0320 21:32:44.907236 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:44.907437 kubelet[2628]: E0320 21:32:44.907298 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:44.907670 kubelet[2628]: E0320 21:32:44.907551 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:45.298567 sudo[2664]: pam_unix(sudo:session): session closed for user root Mar 20 21:32:45.479317 kubelet[2628]: I0320 21:32:45.479266 2628 apiserver.go:52] "Watching apiserver" Mar 20 21:32:45.488438 kubelet[2628]: I0320 21:32:45.488391 2628 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 21:32:45.512273 kubelet[2628]: E0320 21:32:45.512237 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:45.513079 kubelet[2628]: E0320 21:32:45.512352 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:45.513079 kubelet[2628]: E0320 21:32:45.513023 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:45.541785 kubelet[2628]: I0320 
21:32:45.541699 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.541665744 podStartE2EDuration="2.541665744s" podCreationTimestamp="2025-03-20 21:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:32:45.541411877 +0000 UTC m=+1.119087108" watchObservedRunningTime="2025-03-20 21:32:45.541665744 +0000 UTC m=+1.119340975" Mar 20 21:32:45.541990 kubelet[2628]: I0320 21:32:45.541825 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.541817438 podStartE2EDuration="2.541817438s" podCreationTimestamp="2025-03-20 21:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:32:45.534817676 +0000 UTC m=+1.112492897" watchObservedRunningTime="2025-03-20 21:32:45.541817438 +0000 UTC m=+1.119492659" Mar 20 21:32:45.555600 kubelet[2628]: I0320 21:32:45.555375 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.555353343 podStartE2EDuration="2.555353343s" podCreationTimestamp="2025-03-20 21:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:32:45.548743803 +0000 UTC m=+1.126419024" watchObservedRunningTime="2025-03-20 21:32:45.555353343 +0000 UTC m=+1.133028564" Mar 20 21:32:46.513973 kubelet[2628]: E0320 21:32:46.513936 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:46.515260 kubelet[2628]: E0320 21:32:46.514420 2628 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:46.515260 kubelet[2628]: E0320 21:32:46.514591 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:46.627004 sudo[1713]: pam_unix(sudo:session): session closed for user root Mar 20 21:32:46.628759 sshd[1712]: Connection closed by 10.0.0.1 port 55254 Mar 20 21:32:46.629130 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Mar 20 21:32:46.633598 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:55254.service: Deactivated successfully. Mar 20 21:32:46.636032 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 21:32:46.636248 systemd[1]: session-7.scope: Consumed 4.582s CPU time, 258.7M memory peak. Mar 20 21:32:46.637542 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Mar 20 21:32:46.638397 systemd-logind[1496]: Removed session 7. Mar 20 21:32:47.515590 kubelet[2628]: E0320 21:32:47.515539 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:47.968912 kubelet[2628]: E0320 21:32:47.968810 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:50.490859 kubelet[2628]: I0320 21:32:50.490803 2628 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 21:32:50.491303 containerd[1513]: time="2025-03-20T21:32:50.491149228Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 20 21:32:50.491591 kubelet[2628]: I0320 21:32:50.491368 2628 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 21:32:51.492320 systemd[1]: Created slice kubepods-besteffort-poda0b83d94_da55_4d4a_a27e_912a34620f5b.slice - libcontainer container kubepods-besteffort-poda0b83d94_da55_4d4a_a27e_912a34620f5b.slice. Mar 20 21:32:51.502617 systemd[1]: Created slice kubepods-burstable-pod69fb0054_3d30_4a8c_9c34_ae18758ba7e7.slice - libcontainer container kubepods-burstable-pod69fb0054_3d30_4a8c_9c34_ae18758ba7e7.slice. Mar 20 21:32:51.527471 kubelet[2628]: I0320 21:32:51.527427 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0b83d94-da55-4d4a-a27e-912a34620f5b-xtables-lock\") pod \"kube-proxy-29gqz\" (UID: \"a0b83d94-da55-4d4a-a27e-912a34620f5b\") " pod="kube-system/kube-proxy-29gqz" Mar 20 21:32:51.527471 kubelet[2628]: I0320 21:32:51.527462 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-hostproc\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.527471 kubelet[2628]: I0320 21:32:51.527477 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj68s\" (UniqueName: \"kubernetes.io/projected/a0b83d94-da55-4d4a-a27e-912a34620f5b-kube-api-access-vj68s\") pod \"kube-proxy-29gqz\" (UID: \"a0b83d94-da55-4d4a-a27e-912a34620f5b\") " pod="kube-system/kube-proxy-29gqz" Mar 20 21:32:51.527989 kubelet[2628]: I0320 21:32:51.527490 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-run\") pod \"cilium-d5k7l\" (UID: 
\"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.527989 kubelet[2628]: I0320 21:32:51.527503 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-cgroup\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.527989 kubelet[2628]: I0320 21:32:51.527516 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-clustermesh-secrets\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.527989 kubelet[2628]: I0320 21:32:51.527548 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-config-path\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.527989 kubelet[2628]: I0320 21:32:51.527575 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-host-proc-sys-net\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.527989 kubelet[2628]: I0320 21:32:51.527596 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-hubble-tls\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.528126 kubelet[2628]: 
I0320 21:32:51.527628 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-bpf-maps\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.528126 kubelet[2628]: I0320 21:32:51.527661 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a0b83d94-da55-4d4a-a27e-912a34620f5b-kube-proxy\") pod \"kube-proxy-29gqz\" (UID: \"a0b83d94-da55-4d4a-a27e-912a34620f5b\") " pod="kube-system/kube-proxy-29gqz" Mar 20 21:32:51.528126 kubelet[2628]: I0320 21:32:51.527676 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-xtables-lock\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.528126 kubelet[2628]: I0320 21:32:51.527689 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-host-proc-sys-kernel\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.528126 kubelet[2628]: I0320 21:32:51.527704 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvl55\" (UniqueName: \"kubernetes.io/projected/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-kube-api-access-fvl55\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.528126 kubelet[2628]: I0320 21:32:51.527717 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cni-path\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.528256 kubelet[2628]: I0320 21:32:51.527729 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-lib-modules\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.528256 kubelet[2628]: I0320 21:32:51.527742 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-etc-cni-netd\") pod \"cilium-d5k7l\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " pod="kube-system/cilium-d5k7l" Mar 20 21:32:51.528256 kubelet[2628]: I0320 21:32:51.527754 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0b83d94-da55-4d4a-a27e-912a34620f5b-lib-modules\") pod \"kube-proxy-29gqz\" (UID: \"a0b83d94-da55-4d4a-a27e-912a34620f5b\") " pod="kube-system/kube-proxy-29gqz" Mar 20 21:32:51.616343 systemd[1]: Created slice kubepods-besteffort-pod02d67544_dbf2_4617_9c8c_6762cdf49abc.slice - libcontainer container kubepods-besteffort-pod02d67544_dbf2_4617_9c8c_6762cdf49abc.slice. 
Mar 20 21:32:51.628557 kubelet[2628]: I0320 21:32:51.628506 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nks5\" (UniqueName: \"kubernetes.io/projected/02d67544-dbf2-4617-9c8c-6762cdf49abc-kube-api-access-9nks5\") pod \"cilium-operator-6c4d7847fc-bd8gj\" (UID: \"02d67544-dbf2-4617-9c8c-6762cdf49abc\") " pod="kube-system/cilium-operator-6c4d7847fc-bd8gj" Mar 20 21:32:51.628670 kubelet[2628]: I0320 21:32:51.628652 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02d67544-dbf2-4617-9c8c-6762cdf49abc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bd8gj\" (UID: \"02d67544-dbf2-4617-9c8c-6762cdf49abc\") " pod="kube-system/cilium-operator-6c4d7847fc-bd8gj" Mar 20 21:32:51.799979 kubelet[2628]: E0320 21:32:51.799868 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:51.800691 containerd[1513]: time="2025-03-20T21:32:51.800519540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29gqz,Uid:a0b83d94-da55-4d4a-a27e-912a34620f5b,Namespace:kube-system,Attempt:0,}" Mar 20 21:32:51.807919 kubelet[2628]: E0320 21:32:51.807897 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:51.808592 containerd[1513]: time="2025-03-20T21:32:51.808554324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d5k7l,Uid:69fb0054-3d30-4a8c-9c34-ae18758ba7e7,Namespace:kube-system,Attempt:0,}" Mar 20 21:32:51.834467 containerd[1513]: time="2025-03-20T21:32:51.833237081Z" level=info msg="connecting to shim 2a4bf32d78c9669e3d7c6d5b3d69753ea438a6ee4d4c9f2047d314132454ec30" 
address="unix:///run/containerd/s/5a88a1026c60066303eaf960771beee257727b3c4b487b4f576edc31b4db2148" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:32:51.834467 containerd[1513]: time="2025-03-20T21:32:51.833997647Z" level=info msg="connecting to shim 47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf" address="unix:///run/containerd/s/c11b4602ef5776f1ccaf8c54f503a9ecc71d9a35164ed4c0a9f5a9cb95b772e9" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:32:51.858774 systemd[1]: Started cri-containerd-2a4bf32d78c9669e3d7c6d5b3d69753ea438a6ee4d4c9f2047d314132454ec30.scope - libcontainer container 2a4bf32d78c9669e3d7c6d5b3d69753ea438a6ee4d4c9f2047d314132454ec30. Mar 20 21:32:51.862022 systemd[1]: Started cri-containerd-47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf.scope - libcontainer container 47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf. Mar 20 21:32:51.909774 containerd[1513]: time="2025-03-20T21:32:51.909724807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29gqz,Uid:a0b83d94-da55-4d4a-a27e-912a34620f5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a4bf32d78c9669e3d7c6d5b3d69753ea438a6ee4d4c9f2047d314132454ec30\"" Mar 20 21:32:51.910616 kubelet[2628]: E0320 21:32:51.910576 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:51.911341 containerd[1513]: time="2025-03-20T21:32:51.911311724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d5k7l,Uid:69fb0054-3d30-4a8c-9c34-ae18758ba7e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\"" Mar 20 21:32:51.912342 kubelet[2628]: E0320 21:32:51.912266 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 20 21:32:51.912933 containerd[1513]: time="2025-03-20T21:32:51.912872763Z" level=info msg="CreateContainer within sandbox \"2a4bf32d78c9669e3d7c6d5b3d69753ea438a6ee4d4c9f2047d314132454ec30\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 21:32:51.913722 containerd[1513]: time="2025-03-20T21:32:51.913682883Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 20 21:32:51.919045 kubelet[2628]: E0320 21:32:51.919018 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:51.919439 containerd[1513]: time="2025-03-20T21:32:51.919403495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bd8gj,Uid:02d67544-dbf2-4617-9c8c-6762cdf49abc,Namespace:kube-system,Attempt:0,}" Mar 20 21:32:51.929546 containerd[1513]: time="2025-03-20T21:32:51.929502071Z" level=info msg="Container 5b8ce0cecac311b0ba510d3ab0cbd5da9bd173713f34ed67b4fd955774aca440: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:32:51.940937 containerd[1513]: time="2025-03-20T21:32:51.940899152Z" level=info msg="CreateContainer within sandbox \"2a4bf32d78c9669e3d7c6d5b3d69753ea438a6ee4d4c9f2047d314132454ec30\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b8ce0cecac311b0ba510d3ab0cbd5da9bd173713f34ed67b4fd955774aca440\"" Mar 20 21:32:51.941506 containerd[1513]: time="2025-03-20T21:32:51.941478639Z" level=info msg="StartContainer for \"5b8ce0cecac311b0ba510d3ab0cbd5da9bd173713f34ed67b4fd955774aca440\"" Mar 20 21:32:51.943609 containerd[1513]: time="2025-03-20T21:32:51.943382201Z" level=info msg="connecting to shim 5b8ce0cecac311b0ba510d3ab0cbd5da9bd173713f34ed67b4fd955774aca440" address="unix:///run/containerd/s/5a88a1026c60066303eaf960771beee257727b3c4b487b4f576edc31b4db2148" 
protocol=ttrpc version=3 Mar 20 21:32:51.959141 containerd[1513]: time="2025-03-20T21:32:51.958580564Z" level=info msg="connecting to shim 738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d" address="unix:///run/containerd/s/dec4afb44c13c41188eb5d5abfc19e1ce54193118e72cbe33d5069f757be925a" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:32:51.965858 systemd[1]: Started cri-containerd-5b8ce0cecac311b0ba510d3ab0cbd5da9bd173713f34ed67b4fd955774aca440.scope - libcontainer container 5b8ce0cecac311b0ba510d3ab0cbd5da9bd173713f34ed67b4fd955774aca440. Mar 20 21:32:51.984785 systemd[1]: Started cri-containerd-738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d.scope - libcontainer container 738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d. Mar 20 21:32:52.013821 containerd[1513]: time="2025-03-20T21:32:52.013775403Z" level=info msg="StartContainer for \"5b8ce0cecac311b0ba510d3ab0cbd5da9bd173713f34ed67b4fd955774aca440\" returns successfully" Mar 20 21:32:52.028355 containerd[1513]: time="2025-03-20T21:32:52.028299894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bd8gj,Uid:02d67544-dbf2-4617-9c8c-6762cdf49abc,Namespace:kube-system,Attempt:0,} returns sandbox id \"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\"" Mar 20 21:32:52.029140 kubelet[2628]: E0320 21:32:52.029085 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:52.524288 kubelet[2628]: E0320 21:32:52.524250 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:52.533467 kubelet[2628]: I0320 21:32:52.533388 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29gqz" 
podStartSLOduration=1.533363929 podStartE2EDuration="1.533363929s" podCreationTimestamp="2025-03-20 21:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:32:52.533277483 +0000 UTC m=+8.110952704" watchObservedRunningTime="2025-03-20 21:32:52.533363929 +0000 UTC m=+8.111039170" Mar 20 21:32:55.865484 kubelet[2628]: E0320 21:32:55.865432 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:56.777294 kubelet[2628]: E0320 21:32:56.777258 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:57.534376 kubelet[2628]: E0320 21:32:57.534312 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:57.977028 kubelet[2628]: E0320 21:32:57.975228 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:58.536530 kubelet[2628]: E0320 21:32:58.536479 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:32:59.010354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143635245.mount: Deactivated successfully. 
Mar 20 21:33:03.381975 containerd[1513]: time="2025-03-20T21:33:03.381916487Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:33:03.382808 containerd[1513]: time="2025-03-20T21:33:03.382761801Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 20 21:33:03.383965 containerd[1513]: time="2025-03-20T21:33:03.383932111Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:33:03.385424 containerd[1513]: time="2025-03-20T21:33:03.385385198Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.47167233s" Mar 20 21:33:03.385424 containerd[1513]: time="2025-03-20T21:33:03.385416498Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 20 21:33:03.391413 containerd[1513]: time="2025-03-20T21:33:03.391380465Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 20 21:33:03.406918 containerd[1513]: time="2025-03-20T21:33:03.406862410Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 21:33:03.415647 containerd[1513]: time="2025-03-20T21:33:03.415609547Z" level=info msg="Container eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:33:03.421652 containerd[1513]: time="2025-03-20T21:33:03.421610564Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\"" Mar 20 21:33:03.424255 containerd[1513]: time="2025-03-20T21:33:03.424134142Z" level=info msg="StartContainer for \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\"" Mar 20 21:33:03.424876 containerd[1513]: time="2025-03-20T21:33:03.424844931Z" level=info msg="connecting to shim eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550" address="unix:///run/containerd/s/c11b4602ef5776f1ccaf8c54f503a9ecc71d9a35164ed4c0a9f5a9cb95b772e9" protocol=ttrpc version=3 Mar 20 21:33:03.446761 systemd[1]: Started cri-containerd-eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550.scope - libcontainer container eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550. Mar 20 21:33:03.476094 containerd[1513]: time="2025-03-20T21:33:03.476056742Z" level=info msg="StartContainer for \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\" returns successfully" Mar 20 21:33:03.486758 systemd[1]: cri-containerd-eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550.scope: Deactivated successfully. 
Mar 20 21:33:03.488336 containerd[1513]: time="2025-03-20T21:33:03.488294011Z" level=info msg="received exit event container_id:\"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\" id:\"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\" pid:3049 exited_at:{seconds:1742506383 nanos:487894383}" Mar 20 21:33:03.488336 containerd[1513]: time="2025-03-20T21:33:03.488322034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\" id:\"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\" pid:3049 exited_at:{seconds:1742506383 nanos:487894383}" Mar 20 21:33:03.508999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550-rootfs.mount: Deactivated successfully. Mar 20 21:33:03.556318 kubelet[2628]: E0320 21:33:03.556279 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:33:04.453803 update_engine[1501]: I20250320 21:33:04.453682 1501 update_attempter.cc:509] Updating boot flags... 
Mar 20 21:33:04.481783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3089) Mar 20 21:33:04.528762 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3090) Mar 20 21:33:04.569075 kubelet[2628]: E0320 21:33:04.568884 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:33:04.571779 containerd[1513]: time="2025-03-20T21:33:04.571713408Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 21:33:04.587434 containerd[1513]: time="2025-03-20T21:33:04.584844807Z" level=info msg="Container 33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:33:04.590318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935791167.mount: Deactivated successfully. 
Mar 20 21:33:04.592781 containerd[1513]: time="2025-03-20T21:33:04.592739937Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\""
Mar 20 21:33:04.593170 containerd[1513]: time="2025-03-20T21:33:04.593146127Z" level=info msg="StartContainer for \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\""
Mar 20 21:33:04.594308 containerd[1513]: time="2025-03-20T21:33:04.594215285Z" level=info msg="connecting to shim 33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d" address="unix:///run/containerd/s/c11b4602ef5776f1ccaf8c54f503a9ecc71d9a35164ed4c0a9f5a9cb95b772e9" protocol=ttrpc version=3
Mar 20 21:33:04.616779 systemd[1]: Started cri-containerd-33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d.scope - libcontainer container 33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d.
Mar 20 21:33:04.646609 containerd[1513]: time="2025-03-20T21:33:04.646550467Z" level=info msg="StartContainer for \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\" returns successfully"
Mar 20 21:33:04.660294 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 20 21:33:04.660978 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:33:04.661256 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:33:04.663274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:33:04.664495 containerd[1513]: time="2025-03-20T21:33:04.664443254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\" id:\"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\" pid:3109 exited_at:{seconds:1742506384 nanos:664066941}"
Mar 20 21:33:04.664602 containerd[1513]: time="2025-03-20T21:33:04.664578290Z" level=info msg="received exit event container_id:\"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\" id:\"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\" pid:3109 exited_at:{seconds:1742506384 nanos:664066941}"
Mar 20 21:33:04.665904 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 20 21:33:04.666829 systemd[1]: cri-containerd-33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d.scope: Deactivated successfully.
Mar 20 21:33:04.687251 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:33:05.574170 kubelet[2628]: E0320 21:33:05.573958 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:05.577974 containerd[1513]: time="2025-03-20T21:33:05.576032795Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 20 21:33:05.587032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d-rootfs.mount: Deactivated successfully.
Mar 20 21:33:05.590082 containerd[1513]: time="2025-03-20T21:33:05.589124201Z" level=info msg="Container 986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:33:05.601312 containerd[1513]: time="2025-03-20T21:33:05.601217155Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\""
Mar 20 21:33:05.602083 containerd[1513]: time="2025-03-20T21:33:05.601750535Z" level=info msg="StartContainer for \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\""
Mar 20 21:33:05.603243 containerd[1513]: time="2025-03-20T21:33:05.603209260Z" level=info msg="connecting to shim 986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608" address="unix:///run/containerd/s/c11b4602ef5776f1ccaf8c54f503a9ecc71d9a35164ed4c0a9f5a9cb95b772e9" protocol=ttrpc version=3
Mar 20 21:33:05.630781 systemd[1]: Started cri-containerd-986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608.scope - libcontainer container 986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608.
Mar 20 21:33:05.659346 containerd[1513]: time="2025-03-20T21:33:05.659282029Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:33:05.660624 containerd[1513]: time="2025-03-20T21:33:05.660570030Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 20 21:33:05.663309 containerd[1513]: time="2025-03-20T21:33:05.663193742Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 21:33:05.669427 containerd[1513]: time="2025-03-20T21:33:05.669387377Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.277968591s"
Mar 20 21:33:05.669706 containerd[1513]: time="2025-03-20T21:33:05.669433996Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 20 21:33:05.673322 containerd[1513]: time="2025-03-20T21:33:05.673237332Z" level=info msg="CreateContainer within sandbox \"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 20 21:33:05.680377 systemd[1]: cri-containerd-986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608.scope: Deactivated successfully.
Mar 20 21:33:05.710108 containerd[1513]: time="2025-03-20T21:33:05.681671604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\" id:\"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\" pid:3167 exited_at:{seconds:1742506385 nanos:681269563}"
Mar 20 21:33:05.753981 containerd[1513]: time="2025-03-20T21:33:05.753919262Z" level=info msg="received exit event container_id:\"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\" id:\"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\" pid:3167 exited_at:{seconds:1742506385 nanos:681269563}"
Mar 20 21:33:05.761725 containerd[1513]: time="2025-03-20T21:33:05.759734811Z" level=info msg="Container 7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:33:05.763974 containerd[1513]: time="2025-03-20T21:33:05.763909021Z" level=info msg="StartContainer for \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\" returns successfully"
Mar 20 21:33:05.769696 containerd[1513]: time="2025-03-20T21:33:05.769665799Z" level=info msg="CreateContainer within sandbox \"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\""
Mar 20 21:33:05.770354 containerd[1513]: time="2025-03-20T21:33:05.770313357Z" level=info msg="StartContainer for \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\""
Mar 20 21:33:05.771781 containerd[1513]: time="2025-03-20T21:33:05.771731363Z" level=info msg="connecting to shim 7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c" address="unix:///run/containerd/s/dec4afb44c13c41188eb5d5abfc19e1ce54193118e72cbe33d5069f757be925a" protocol=ttrpc version=3
Mar 20 21:33:05.797914 systemd[1]: Started cri-containerd-7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c.scope - libcontainer container 7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c.
Mar 20 21:33:06.050381 containerd[1513]: time="2025-03-20T21:33:06.050246792Z" level=info msg="StartContainer for \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" returns successfully"
Mar 20 21:33:06.577209 kubelet[2628]: E0320 21:33:06.577154 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:06.585370 kubelet[2628]: E0320 21:33:06.583700 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:06.589107 containerd[1513]: time="2025-03-20T21:33:06.588201004Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 20 21:33:06.590128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608-rootfs.mount: Deactivated successfully.
Mar 20 21:33:06.608674 containerd[1513]: time="2025-03-20T21:33:06.605883169Z" level=info msg="Container e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:33:06.623589 containerd[1513]: time="2025-03-20T21:33:06.623522562Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\""
Mar 20 21:33:06.629918 containerd[1513]: time="2025-03-20T21:33:06.627681708Z" level=info msg="StartContainer for \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\""
Mar 20 21:33:06.632662 containerd[1513]: time="2025-03-20T21:33:06.630314494Z" level=info msg="connecting to shim e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698" address="unix:///run/containerd/s/c11b4602ef5776f1ccaf8c54f503a9ecc71d9a35164ed4c0a9f5a9cb95b772e9" protocol=ttrpc version=3
Mar 20 21:33:06.639432 kubelet[2628]: I0320 21:33:06.639360 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bd8gj" podStartSLOduration=1.998897292 podStartE2EDuration="15.639338933s" podCreationTimestamp="2025-03-20 21:32:51 +0000 UTC" firstStartedPulling="2025-03-20 21:32:52.03027735 +0000 UTC m=+7.607952571" lastFinishedPulling="2025-03-20 21:33:05.670718991 +0000 UTC m=+21.248394212" observedRunningTime="2025-03-20 21:33:06.599603637 +0000 UTC m=+22.177278858" watchObservedRunningTime="2025-03-20 21:33:06.639338933 +0000 UTC m=+22.217014154"
Mar 20 21:33:06.664783 systemd[1]: Started cri-containerd-e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698.scope - libcontainer container e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698.
Mar 20 21:33:06.692197 systemd[1]: cri-containerd-e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698.scope: Deactivated successfully.
Mar 20 21:33:06.693697 containerd[1513]: time="2025-03-20T21:33:06.693629843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\" id:\"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\" pid:3246 exited_at:{seconds:1742506386 nanos:693343611}"
Mar 20 21:33:06.696909 containerd[1513]: time="2025-03-20T21:33:06.696875218Z" level=info msg="received exit event container_id:\"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\" id:\"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\" pid:3246 exited_at:{seconds:1742506386 nanos:693343611}"
Mar 20 21:33:06.698548 containerd[1513]: time="2025-03-20T21:33:06.698504273Z" level=info msg="StartContainer for \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\" returns successfully"
Mar 20 21:33:06.719190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698-rootfs.mount: Deactivated successfully.
Mar 20 21:33:07.591164 kubelet[2628]: E0320 21:33:07.589759 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:07.591164 kubelet[2628]: E0320 21:33:07.589905 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:07.592259 containerd[1513]: time="2025-03-20T21:33:07.592201407Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 21:33:07.675810 containerd[1513]: time="2025-03-20T21:33:07.675748286Z" level=info msg="Container 050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:33:07.682895 containerd[1513]: time="2025-03-20T21:33:07.682855907Z" level=info msg="CreateContainer within sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\""
Mar 20 21:33:07.683368 containerd[1513]: time="2025-03-20T21:33:07.683335565Z" level=info msg="StartContainer for \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\""
Mar 20 21:33:07.684180 containerd[1513]: time="2025-03-20T21:33:07.684145138Z" level=info msg="connecting to shim 050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6" address="unix:///run/containerd/s/c11b4602ef5776f1ccaf8c54f503a9ecc71d9a35164ed4c0a9f5a9cb95b772e9" protocol=ttrpc version=3
Mar 20 21:33:07.711869 systemd[1]: Started cri-containerd-050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6.scope - libcontainer container 050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6.
Mar 20 21:33:07.749848 containerd[1513]: time="2025-03-20T21:33:07.749797129Z" level=info msg="StartContainer for \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" returns successfully"
Mar 20 21:33:07.773199 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:37312.service - OpenSSH per-connection server daemon (10.0.0.1:37312).
Mar 20 21:33:07.831341 sshd[3310]: Accepted publickey for core from 10.0.0.1 port 37312 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:33:07.832996 sshd-session[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:33:07.839448 containerd[1513]: time="2025-03-20T21:33:07.838368357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" id:\"7663bf01ddc00180cf59d6756ef5c41972a8d934a805787c0e0e1caa32de64b2\" pid:3316 exited_at:{seconds:1742506387 nanos:837267834}"
Mar 20 21:33:07.839684 systemd-logind[1496]: New session 8 of user core.
Mar 20 21:33:07.844036 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 20 21:33:07.914197 kubelet[2628]: I0320 21:33:07.914110 2628 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 20 21:33:07.947057 systemd[1]: Created slice kubepods-burstable-pod66c1ed9b_46bb_4fe8_bc87_a69babebf828.slice - libcontainer container kubepods-burstable-pod66c1ed9b_46bb_4fe8_bc87_a69babebf828.slice.
Mar 20 21:33:07.956627 systemd[1]: Created slice kubepods-burstable-pod71e93742_235d_450b_be68_add92f9516c8.slice - libcontainer container kubepods-burstable-pod71e93742_235d_450b_be68_add92f9516c8.slice.
Mar 20 21:33:07.980206 sshd[3346]: Connection closed by 10.0.0.1 port 37312
Mar 20 21:33:07.980830 sshd-session[3310]: pam_unix(sshd:session): session closed for user core
Mar 20 21:33:07.988150 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit.
Mar 20 21:33:07.989178 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:37312.service: Deactivated successfully.
Mar 20 21:33:07.991434 systemd[1]: session-8.scope: Deactivated successfully.
Mar 20 21:33:07.992475 systemd-logind[1496]: Removed session 8.
Mar 20 21:33:08.034361 kubelet[2628]: I0320 21:33:08.034219 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf94s\" (UniqueName: \"kubernetes.io/projected/71e93742-235d-450b-be68-add92f9516c8-kube-api-access-gf94s\") pod \"coredns-668d6bf9bc-gqlnc\" (UID: \"71e93742-235d-450b-be68-add92f9516c8\") " pod="kube-system/coredns-668d6bf9bc-gqlnc"
Mar 20 21:33:08.034361 kubelet[2628]: I0320 21:33:08.034268 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txdtc\" (UniqueName: \"kubernetes.io/projected/66c1ed9b-46bb-4fe8-bc87-a69babebf828-kube-api-access-txdtc\") pod \"coredns-668d6bf9bc-szpvd\" (UID: \"66c1ed9b-46bb-4fe8-bc87-a69babebf828\") " pod="kube-system/coredns-668d6bf9bc-szpvd"
Mar 20 21:33:08.034361 kubelet[2628]: I0320 21:33:08.034289 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71e93742-235d-450b-be68-add92f9516c8-config-volume\") pod \"coredns-668d6bf9bc-gqlnc\" (UID: \"71e93742-235d-450b-be68-add92f9516c8\") " pod="kube-system/coredns-668d6bf9bc-gqlnc"
Mar 20 21:33:08.034361 kubelet[2628]: I0320 21:33:08.034306 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66c1ed9b-46bb-4fe8-bc87-a69babebf828-config-volume\") pod \"coredns-668d6bf9bc-szpvd\" (UID: \"66c1ed9b-46bb-4fe8-bc87-a69babebf828\") " pod="kube-system/coredns-668d6bf9bc-szpvd"
Mar 20 21:33:08.255087 kubelet[2628]: E0320 21:33:08.254965 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:08.256162 containerd[1513]: time="2025-03-20T21:33:08.256089013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpvd,Uid:66c1ed9b-46bb-4fe8-bc87-a69babebf828,Namespace:kube-system,Attempt:0,}"
Mar 20 21:33:08.260574 kubelet[2628]: E0320 21:33:08.260547 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:08.261367 containerd[1513]: time="2025-03-20T21:33:08.261312984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqlnc,Uid:71e93742-235d-450b-be68-add92f9516c8,Namespace:kube-system,Attempt:0,}"
Mar 20 21:33:08.596931 kubelet[2628]: E0320 21:33:08.596899 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:08.613181 kubelet[2628]: I0320 21:33:08.613096 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d5k7l" podStartSLOduration=6.135168578 podStartE2EDuration="17.613071663s" podCreationTimestamp="2025-03-20 21:32:51 +0000 UTC" firstStartedPulling="2025-03-20 21:32:51.913333948 +0000 UTC m=+7.491009169" lastFinishedPulling="2025-03-20 21:33:03.391237032 +0000 UTC m=+18.968912254" observedRunningTime="2025-03-20 21:33:08.612884318 +0000 UTC m=+24.190559539" watchObservedRunningTime="2025-03-20 21:33:08.613071663 +0000 UTC m=+24.190746884"
Mar 20 21:33:09.600004 kubelet[2628]: E0320 21:33:09.599919 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:09.988893 systemd-networkd[1428]: cilium_host: Link UP
Mar 20 21:33:09.989062 systemd-networkd[1428]: cilium_net: Link UP
Mar 20 21:33:09.989262 systemd-networkd[1428]: cilium_net: Gained carrier
Mar 20 21:33:09.989477 systemd-networkd[1428]: cilium_host: Gained carrier
Mar 20 21:33:10.114902 systemd-networkd[1428]: cilium_vxlan: Link UP
Mar 20 21:33:10.114931 systemd-networkd[1428]: cilium_vxlan: Gained carrier
Mar 20 21:33:10.226821 systemd-networkd[1428]: cilium_net: Gained IPv6LL
Mar 20 21:33:10.367709 kernel: NET: Registered PF_ALG protocol family
Mar 20 21:33:10.602220 kubelet[2628]: E0320 21:33:10.602162 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:10.699790 systemd-networkd[1428]: cilium_host: Gained IPv6LL
Mar 20 21:33:11.068171 systemd-networkd[1428]: lxc_health: Link UP
Mar 20 21:33:11.087453 systemd-networkd[1428]: lxc_health: Gained carrier
Mar 20 21:33:11.325794 kernel: eth0: renamed from tmp67ae7
Mar 20 21:33:11.338080 systemd-networkd[1428]: lxc550d37a64328: Link UP
Mar 20 21:33:11.339470 systemd-networkd[1428]: lxc550d37a64328: Gained carrier
Mar 20 21:33:11.340275 systemd-networkd[1428]: lxc90c0a45afde3: Link UP
Mar 20 21:33:11.347735 kernel: eth0: renamed from tmp51824
Mar 20 21:33:11.356679 systemd-networkd[1428]: lxc90c0a45afde3: Gained carrier
Mar 20 21:33:11.810994 kubelet[2628]: E0320 21:33:11.810949 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:11.980078 systemd-networkd[1428]: cilium_vxlan: Gained IPv6LL
Mar 20 21:33:12.554805 systemd-networkd[1428]: lxc550d37a64328: Gained IPv6LL
Mar 20 21:33:12.605520 kubelet[2628]: E0320 21:33:12.605450 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:12.682800 systemd-networkd[1428]: lxc_health: Gained IPv6LL
Mar 20 21:33:12.996995 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:41982.service - OpenSSH per-connection server daemon (10.0.0.1:41982).
Mar 20 21:33:13.065154 sshd[3805]: Accepted publickey for core from 10.0.0.1 port 41982 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc
Mar 20 21:33:13.066836 sshd-session[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:33:13.067175 systemd-networkd[1428]: lxc90c0a45afde3: Gained IPv6LL
Mar 20 21:33:13.073496 systemd-logind[1496]: New session 9 of user core.
Mar 20 21:33:13.080776 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 20 21:33:13.200916 sshd[3807]: Connection closed by 10.0.0.1 port 41982
Mar 20 21:33:13.201301 sshd-session[3805]: pam_unix(sshd:session): session closed for user core
Mar 20 21:33:13.205860 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:41982.service: Deactivated successfully.
Mar 20 21:33:13.207930 systemd[1]: session-9.scope: Deactivated successfully.
Mar 20 21:33:13.208813 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit.
Mar 20 21:33:13.209939 systemd-logind[1496]: Removed session 9.
Mar 20 21:33:13.607316 kubelet[2628]: E0320 21:33:13.607279 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:14.798949 containerd[1513]: time="2025-03-20T21:33:14.798891821Z" level=info msg="connecting to shim 518245c7247e07c255a901a10678a6ae0f8f698a826e9697236cc8556ea7af7c" address="unix:///run/containerd/s/882d473e7545115d09974f18b5d8a42f079c3793d77e85e92589c7f026bedfd4" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:33:14.817468 containerd[1513]: time="2025-03-20T21:33:14.817419786Z" level=info msg="connecting to shim 67ae759b56b614a6f8a319b2ab70672c2f81ad6595dcf01cfb24303835d6b282" address="unix:///run/containerd/s/8a07d871840c7d41d8088daaa8c45516056b28b336b8dea7718253a7738c748a" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:33:14.824812 systemd[1]: Started cri-containerd-518245c7247e07c255a901a10678a6ae0f8f698a826e9697236cc8556ea7af7c.scope - libcontainer container 518245c7247e07c255a901a10678a6ae0f8f698a826e9697236cc8556ea7af7c.
Mar 20 21:33:14.847759 systemd[1]: Started cri-containerd-67ae759b56b614a6f8a319b2ab70672c2f81ad6595dcf01cfb24303835d6b282.scope - libcontainer container 67ae759b56b614a6f8a319b2ab70672c2f81ad6595dcf01cfb24303835d6b282.
Mar 20 21:33:14.852036 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 21:33:14.859306 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 21:33:14.885358 containerd[1513]: time="2025-03-20T21:33:14.885324293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpvd,Uid:66c1ed9b-46bb-4fe8-bc87-a69babebf828,Namespace:kube-system,Attempt:0,} returns sandbox id \"518245c7247e07c255a901a10678a6ae0f8f698a826e9697236cc8556ea7af7c\""
Mar 20 21:33:14.887626 kubelet[2628]: E0320 21:33:14.886369 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:14.890952 containerd[1513]: time="2025-03-20T21:33:14.890741213Z" level=info msg="CreateContainer within sandbox \"518245c7247e07c255a901a10678a6ae0f8f698a826e9697236cc8556ea7af7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 21:33:14.891874 containerd[1513]: time="2025-03-20T21:33:14.891848402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqlnc,Uid:71e93742-235d-450b-be68-add92f9516c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"67ae759b56b614a6f8a319b2ab70672c2f81ad6595dcf01cfb24303835d6b282\""
Mar 20 21:33:14.893146 kubelet[2628]: E0320 21:33:14.893102 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:14.894676 containerd[1513]: time="2025-03-20T21:33:14.894616083Z" level=info msg="CreateContainer within sandbox \"67ae759b56b614a6f8a319b2ab70672c2f81ad6595dcf01cfb24303835d6b282\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 21:33:14.907287 containerd[1513]: time="2025-03-20T21:33:14.907015114Z" level=info msg="Container f8818a8ac6f0a3e3a191262bcb9c73ee88cd958e7a50bb236c5af1a85c36df1d: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:33:14.910236 containerd[1513]: time="2025-03-20T21:33:14.910209291Z" level=info msg="Container 9c9cdae0b200f315bdcbaf99009d564330a0d47ffa06e74e8950901926235f00: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:33:14.915251 containerd[1513]: time="2025-03-20T21:33:14.915219483Z" level=info msg="CreateContainer within sandbox \"518245c7247e07c255a901a10678a6ae0f8f698a826e9697236cc8556ea7af7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8818a8ac6f0a3e3a191262bcb9c73ee88cd958e7a50bb236c5af1a85c36df1d\""
Mar 20 21:33:14.915781 containerd[1513]: time="2025-03-20T21:33:14.915760784Z" level=info msg="StartContainer for \"f8818a8ac6f0a3e3a191262bcb9c73ee88cd958e7a50bb236c5af1a85c36df1d\""
Mar 20 21:33:14.917491 containerd[1513]: time="2025-03-20T21:33:14.917450772Z" level=info msg="connecting to shim f8818a8ac6f0a3e3a191262bcb9c73ee88cd958e7a50bb236c5af1a85c36df1d" address="unix:///run/containerd/s/882d473e7545115d09974f18b5d8a42f079c3793d77e85e92589c7f026bedfd4" protocol=ttrpc version=3
Mar 20 21:33:14.931604 containerd[1513]: time="2025-03-20T21:33:14.931561494Z" level=info msg="CreateContainer within sandbox \"67ae759b56b614a6f8a319b2ab70672c2f81ad6595dcf01cfb24303835d6b282\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c9cdae0b200f315bdcbaf99009d564330a0d47ffa06e74e8950901926235f00\""
Mar 20 21:33:14.933052 containerd[1513]: time="2025-03-20T21:33:14.932257436Z" level=info msg="StartContainer for \"9c9cdae0b200f315bdcbaf99009d564330a0d47ffa06e74e8950901926235f00\""
Mar 20 21:33:14.934928 containerd[1513]: time="2025-03-20T21:33:14.934871207Z" level=info msg="connecting to shim 9c9cdae0b200f315bdcbaf99009d564330a0d47ffa06e74e8950901926235f00" address="unix:///run/containerd/s/8a07d871840c7d41d8088daaa8c45516056b28b336b8dea7718253a7738c748a" protocol=ttrpc version=3
Mar 20 21:33:14.941860 systemd[1]: Started cri-containerd-f8818a8ac6f0a3e3a191262bcb9c73ee88cd958e7a50bb236c5af1a85c36df1d.scope - libcontainer container f8818a8ac6f0a3e3a191262bcb9c73ee88cd958e7a50bb236c5af1a85c36df1d.
Mar 20 21:33:14.958768 systemd[1]: Started cri-containerd-9c9cdae0b200f315bdcbaf99009d564330a0d47ffa06e74e8950901926235f00.scope - libcontainer container 9c9cdae0b200f315bdcbaf99009d564330a0d47ffa06e74e8950901926235f00.
Mar 20 21:33:15.012674 containerd[1513]: time="2025-03-20T21:33:15.012614780Z" level=info msg="StartContainer for \"9c9cdae0b200f315bdcbaf99009d564330a0d47ffa06e74e8950901926235f00\" returns successfully"
Mar 20 21:33:15.012936 containerd[1513]: time="2025-03-20T21:33:15.012770623Z" level=info msg="StartContainer for \"f8818a8ac6f0a3e3a191262bcb9c73ee88cd958e7a50bb236c5af1a85c36df1d\" returns successfully"
Mar 20 21:33:15.613053 kubelet[2628]: E0320 21:33:15.613000 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:15.620056 kubelet[2628]: E0320 21:33:15.619073 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:15.641186 kubelet[2628]: I0320 21:33:15.641080 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gqlnc" podStartSLOduration=24.64104535 podStartE2EDuration="24.64104535s" podCreationTimestamp="2025-03-20 21:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:33:15.626961829 +0000 UTC m=+31.204637050" watchObservedRunningTime="2025-03-20 21:33:15.64104535 +0000 UTC m=+31.218720571"
Mar 20 21:33:15.641434 kubelet[2628]: I0320 21:33:15.641200 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-szpvd" podStartSLOduration=24.641193889 podStartE2EDuration="24.641193889s" podCreationTimestamp="2025-03-20 21:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:33:15.641168822 +0000 UTC m=+31.218844053" watchObservedRunningTime="2025-03-20 21:33:15.641193889 +0000 UTC m=+31.218869120"
Mar 20 21:33:15.794251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815821265.mount: Deactivated successfully.
Mar 20 21:33:16.621397 kubelet[2628]: E0320 21:33:16.621339 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:16.621933 kubelet[2628]: E0320 21:33:16.621340 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:17.627559 kubelet[2628]: E0320 21:33:17.625394 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:17.627559 kubelet[2628]: E0320 21:33:17.626007 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:33:18.216381 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:41994.service - OpenSSH per-connection server daemon (10.0.0.1:41994).
Mar 20 21:33:18.269627 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 41994 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:18.271137 sshd-session[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:18.275743 systemd-logind[1496]: New session 10 of user core. Mar 20 21:33:18.295883 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 20 21:33:18.413188 sshd[4002]: Connection closed by 10.0.0.1 port 41994 Mar 20 21:33:18.413536 sshd-session[4000]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:18.417560 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:41994.service: Deactivated successfully. Mar 20 21:33:18.419735 systemd[1]: session-10.scope: Deactivated successfully. Mar 20 21:33:18.420450 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Mar 20 21:33:18.421520 systemd-logind[1496]: Removed session 10. Mar 20 21:33:23.425850 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:47408.service - OpenSSH per-connection server daemon (10.0.0.1:47408). Mar 20 21:33:23.468372 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 47408 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:23.469913 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:23.474373 systemd-logind[1496]: New session 11 of user core. Mar 20 21:33:23.482780 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 20 21:33:23.599387 sshd[4021]: Connection closed by 10.0.0.1 port 47408 Mar 20 21:33:23.599750 sshd-session[4019]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:23.603579 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:47408.service: Deactivated successfully. Mar 20 21:33:23.605756 systemd[1]: session-11.scope: Deactivated successfully. Mar 20 21:33:23.606490 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. 
Mar 20 21:33:23.607404 systemd-logind[1496]: Removed session 11. Mar 20 21:33:28.614731 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:47418.service - OpenSSH per-connection server daemon (10.0.0.1:47418). Mar 20 21:33:28.667525 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 47418 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:28.668983 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:28.672916 systemd-logind[1496]: New session 12 of user core. Mar 20 21:33:28.683756 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 20 21:33:28.787993 sshd[4038]: Connection closed by 10.0.0.1 port 47418 Mar 20 21:33:28.788336 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:28.800427 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:47418.service: Deactivated successfully. Mar 20 21:33:28.802728 systemd[1]: session-12.scope: Deactivated successfully. Mar 20 21:33:28.804711 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Mar 20 21:33:28.806392 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:47434.service - OpenSSH per-connection server daemon (10.0.0.1:47434). Mar 20 21:33:28.807304 systemd-logind[1496]: Removed session 12. Mar 20 21:33:28.858800 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 47434 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:28.860265 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:28.864625 systemd-logind[1496]: New session 13 of user core. Mar 20 21:33:28.874799 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 20 21:33:29.019687 sshd[4054]: Connection closed by 10.0.0.1 port 47434 Mar 20 21:33:29.020228 sshd-session[4051]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:29.033409 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:47434.service: Deactivated successfully. Mar 20 21:33:29.037509 systemd[1]: session-13.scope: Deactivated successfully. Mar 20 21:33:29.042363 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Mar 20 21:33:29.046420 systemd-logind[1496]: Removed session 13. Mar 20 21:33:29.051271 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:47438.service - OpenSSH per-connection server daemon (10.0.0.1:47438). Mar 20 21:33:29.105543 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 47438 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:29.106926 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:29.111176 systemd-logind[1496]: New session 14 of user core. Mar 20 21:33:29.121756 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 20 21:33:29.239893 sshd[4067]: Connection closed by 10.0.0.1 port 47438 Mar 20 21:33:29.240104 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:29.244851 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:47438.service: Deactivated successfully. Mar 20 21:33:29.247374 systemd[1]: session-14.scope: Deactivated successfully. Mar 20 21:33:29.248163 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Mar 20 21:33:29.249351 systemd-logind[1496]: Removed session 14. Mar 20 21:33:34.259770 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:58280.service - OpenSSH per-connection server daemon (10.0.0.1:58280). 
Mar 20 21:33:34.313432 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 58280 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:34.314963 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:34.319172 systemd-logind[1496]: New session 15 of user core. Mar 20 21:33:34.327761 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 20 21:33:34.446151 sshd[4083]: Connection closed by 10.0.0.1 port 58280 Mar 20 21:33:34.446418 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:34.451008 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:58280.service: Deactivated successfully. Mar 20 21:33:34.453274 systemd[1]: session-15.scope: Deactivated successfully. Mar 20 21:33:34.454072 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Mar 20 21:33:34.454923 systemd-logind[1496]: Removed session 15. Mar 20 21:33:39.458684 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:58294.service - OpenSSH per-connection server daemon (10.0.0.1:58294). Mar 20 21:33:39.508415 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 58294 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:39.510037 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:39.514264 systemd-logind[1496]: New session 16 of user core. Mar 20 21:33:39.520770 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 20 21:33:39.626671 sshd[4099]: Connection closed by 10.0.0.1 port 58294 Mar 20 21:33:39.627031 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:39.640476 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:58294.service: Deactivated successfully. Mar 20 21:33:39.642662 systemd[1]: session-16.scope: Deactivated successfully. Mar 20 21:33:39.644309 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. 
Mar 20 21:33:39.646197 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:58302.service - OpenSSH per-connection server daemon (10.0.0.1:58302). Mar 20 21:33:39.647169 systemd-logind[1496]: Removed session 16. Mar 20 21:33:39.696385 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 58302 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:39.697960 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:39.702377 systemd-logind[1496]: New session 17 of user core. Mar 20 21:33:39.713778 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 20 21:33:39.897772 sshd[4115]: Connection closed by 10.0.0.1 port 58302 Mar 20 21:33:39.898314 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:39.909464 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:58302.service: Deactivated successfully. Mar 20 21:33:39.911337 systemd[1]: session-17.scope: Deactivated successfully. Mar 20 21:33:39.912868 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Mar 20 21:33:39.914305 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:58306.service - OpenSSH per-connection server daemon (10.0.0.1:58306). Mar 20 21:33:39.915221 systemd-logind[1496]: Removed session 17. Mar 20 21:33:39.973013 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 58306 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:39.974787 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:39.980569 systemd-logind[1496]: New session 18 of user core. Mar 20 21:33:39.993861 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 20 21:33:40.721445 sshd[4130]: Connection closed by 10.0.0.1 port 58306 Mar 20 21:33:40.721877 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:40.734540 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:58306.service: Deactivated successfully. Mar 20 21:33:40.739282 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 21:33:40.741580 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Mar 20 21:33:40.745263 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:58312.service - OpenSSH per-connection server daemon (10.0.0.1:58312). Mar 20 21:33:40.747945 systemd-logind[1496]: Removed session 18. Mar 20 21:33:40.798563 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 58312 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:40.800423 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:40.805399 systemd-logind[1496]: New session 19 of user core. Mar 20 21:33:40.813782 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 20 21:33:41.056862 sshd[4151]: Connection closed by 10.0.0.1 port 58312 Mar 20 21:33:41.057465 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:41.067032 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:58312.service: Deactivated successfully. Mar 20 21:33:41.069289 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 21:33:41.071098 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Mar 20 21:33:41.072428 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:38110.service - OpenSSH per-connection server daemon (10.0.0.1:38110). Mar 20 21:33:41.073207 systemd-logind[1496]: Removed session 19. 
Mar 20 21:33:41.122548 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 38110 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:41.124376 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:41.128797 systemd-logind[1496]: New session 20 of user core. Mar 20 21:33:41.142764 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 21:33:41.251794 sshd[4164]: Connection closed by 10.0.0.1 port 38110 Mar 20 21:33:41.252163 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:41.256380 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:38110.service: Deactivated successfully. Mar 20 21:33:41.258529 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 21:33:41.259327 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. Mar 20 21:33:41.260254 systemd-logind[1496]: Removed session 20. Mar 20 21:33:46.265854 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:38118.service - OpenSSH per-connection server daemon (10.0.0.1:38118). Mar 20 21:33:46.318996 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 38118 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:46.320651 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:46.324749 systemd-logind[1496]: New session 21 of user core. Mar 20 21:33:46.338749 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 20 21:33:46.445329 sshd[4183]: Connection closed by 10.0.0.1 port 38118 Mar 20 21:33:46.445731 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:46.450347 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:38118.service: Deactivated successfully. Mar 20 21:33:46.452471 systemd[1]: session-21.scope: Deactivated successfully. Mar 20 21:33:46.453290 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit. 
Mar 20 21:33:46.454203 systemd-logind[1496]: Removed session 21. Mar 20 21:33:51.458619 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:46822.service - OpenSSH per-connection server daemon (10.0.0.1:46822). Mar 20 21:33:51.508542 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 46822 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:51.509866 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:51.513798 systemd-logind[1496]: New session 22 of user core. Mar 20 21:33:51.519800 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 20 21:33:51.623855 sshd[4198]: Connection closed by 10.0.0.1 port 46822 Mar 20 21:33:51.624172 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:51.627956 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:46822.service: Deactivated successfully. Mar 20 21:33:51.629911 systemd[1]: session-22.scope: Deactivated successfully. Mar 20 21:33:51.630536 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit. Mar 20 21:33:51.631322 systemd-logind[1496]: Removed session 22. Mar 20 21:33:56.640810 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:46830.service - OpenSSH per-connection server daemon (10.0.0.1:46830). Mar 20 21:33:56.691700 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 46830 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:33:56.692994 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:33:56.697103 systemd-logind[1496]: New session 23 of user core. Mar 20 21:33:56.708747 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 20 21:33:56.816696 sshd[4215]: Connection closed by 10.0.0.1 port 46830 Mar 20 21:33:56.816992 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Mar 20 21:33:56.821248 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:46830.service: Deactivated successfully. Mar 20 21:33:56.823572 systemd[1]: session-23.scope: Deactivated successfully. Mar 20 21:33:56.824258 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit. Mar 20 21:33:56.825193 systemd-logind[1496]: Removed session 23. Mar 20 21:34:01.501220 kubelet[2628]: E0320 21:34:01.501169 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:34:01.831048 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:52216.service - OpenSSH per-connection server daemon (10.0.0.1:52216). Mar 20 21:34:01.882694 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 52216 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:34:01.884030 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:34:01.887950 systemd-logind[1496]: New session 24 of user core. Mar 20 21:34:01.901762 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 20 21:34:02.007650 sshd[4230]: Connection closed by 10.0.0.1 port 52216 Mar 20 21:34:02.008039 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Mar 20 21:34:02.019651 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:52216.service: Deactivated successfully. Mar 20 21:34:02.021886 systemd[1]: session-24.scope: Deactivated successfully. Mar 20 21:34:02.023494 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit. Mar 20 21:34:02.024963 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:52218.service - OpenSSH per-connection server daemon (10.0.0.1:52218). 
Mar 20 21:34:02.025783 systemd-logind[1496]: Removed session 24. Mar 20 21:34:02.074576 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 52218 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:34:02.075972 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:34:02.080480 systemd-logind[1496]: New session 25 of user core. Mar 20 21:34:02.089769 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 20 21:34:03.416265 containerd[1513]: time="2025-03-20T21:34:03.415899193Z" level=info msg="StopContainer for \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" with timeout 30 (s)" Mar 20 21:34:03.416804 containerd[1513]: time="2025-03-20T21:34:03.416451834Z" level=info msg="Stop container \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" with signal terminated" Mar 20 21:34:03.431497 systemd[1]: cri-containerd-7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c.scope: Deactivated successfully. 
Mar 20 21:34:03.436511 containerd[1513]: time="2025-03-20T21:34:03.433567752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" id:\"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" pid:3212 exited_at:{seconds:1742506443 nanos:433066780}"
Mar 20 21:34:03.436511 containerd[1513]: time="2025-03-20T21:34:03.433701379Z" level=info msg="received exit event container_id:\"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" id:\"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" pid:3212 exited_at:{seconds:1742506443 nanos:433066780}"
Mar 20 21:34:03.451440 containerd[1513]: time="2025-03-20T21:34:03.451392642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" id:\"55b74778386ac4bb79fc97af535cdf6d6b35b06a0d444851064a6d2cdfd24301\" pid:4273 exited_at:{seconds:1742506443 nanos:451109268}"
Mar 20 21:34:03.463568 containerd[1513]: time="2025-03-20T21:34:03.463523504Z" level=info msg="StopContainer for \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" with timeout 2 (s)"
Mar 20 21:34:03.464058 containerd[1513]: time="2025-03-20T21:34:03.464030327Z" level=info msg="Stop container \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" with signal terminated"
Mar 20 21:34:03.470731 containerd[1513]: time="2025-03-20T21:34:03.470346350Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 20 21:34:03.476021 systemd-networkd[1428]: lxc_health: Link DOWN
Mar 20 21:34:03.476030 systemd-networkd[1428]: lxc_health: Lost carrier
Mar 20 21:34:03.476393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c-rootfs.mount: Deactivated successfully.
Mar 20 21:34:03.488461 containerd[1513]: time="2025-03-20T21:34:03.488319645Z" level=info msg="StopContainer for \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" returns successfully"
Mar 20 21:34:03.488964 containerd[1513]: time="2025-03-20T21:34:03.488945327Z" level=info msg="StopPodSandbox for \"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\""
Mar 20 21:34:03.494731 systemd[1]: cri-containerd-050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6.scope: Deactivated successfully.
Mar 20 21:34:03.495242 systemd[1]: cri-containerd-050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6.scope: Consumed 7.076s CPU time, 125.8M memory peak, 328K read from disk, 13.3M written to disk.
Mar 20 21:34:03.495934 containerd[1513]: time="2025-03-20T21:34:03.495899886Z" level=info msg="Container to stop \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 21:34:03.496204 containerd[1513]: time="2025-03-20T21:34:03.496172680Z" level=info msg="received exit event container_id:\"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" id:\"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" pid:3283 exited_at:{seconds:1742506443 nanos:495954911}"
Mar 20 21:34:03.496435 containerd[1513]: time="2025-03-20T21:34:03.496386581Z" level=info msg="TaskExit event in podsandbox handler container_id:\"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" id:\"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" pid:3283 exited_at:{seconds:1742506443 nanos:495954911}"
Mar 20 21:34:03.520113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6-rootfs.mount: Deactivated successfully.
Mar 20 21:34:03.521100 systemd[1]: cri-containerd-738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d.scope: Deactivated successfully.
Mar 20 21:34:03.521896 containerd[1513]: time="2025-03-20T21:34:03.521707750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\" id:\"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\" pid:2853 exit_status:137 exited_at:{seconds:1742506443 nanos:520335694}"
Mar 20 21:34:03.531614 containerd[1513]: time="2025-03-20T21:34:03.531576617Z" level=info msg="StopContainer for \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" returns successfully"
Mar 20 21:34:03.532042 containerd[1513]: time="2025-03-20T21:34:03.532020801Z" level=info msg="StopPodSandbox for \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\""
Mar 20 21:34:03.532082 containerd[1513]: time="2025-03-20T21:34:03.532072530Z" level=info msg="Container to stop \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 21:34:03.532108 containerd[1513]: time="2025-03-20T21:34:03.532082289Z" level=info msg="Container to stop \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 21:34:03.532108 containerd[1513]: time="2025-03-20T21:34:03.532090154Z" level=info msg="Container to stop \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 21:34:03.532108 containerd[1513]: time="2025-03-20T21:34:03.532098038Z" level=info msg="Container to stop \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 21:34:03.532108 containerd[1513]: time="2025-03-20T21:34:03.532105664Z" level=info msg="Container to stop \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 21:34:03.538796 systemd[1]: cri-containerd-47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf.scope: Deactivated successfully.
Mar 20 21:34:03.549415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d-rootfs.mount: Deactivated successfully.
Mar 20 21:34:03.552634 containerd[1513]: time="2025-03-20T21:34:03.552603457Z" level=info msg="shim disconnected" id=738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d namespace=k8s.io
Mar 20 21:34:03.552907 containerd[1513]: time="2025-03-20T21:34:03.552821476Z" level=warning msg="cleaning up after shim disconnected" id=738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d namespace=k8s.io
Mar 20 21:34:03.558357 containerd[1513]: time="2025-03-20T21:34:03.552836946Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 20 21:34:03.560493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf-rootfs.mount: Deactivated successfully.
Mar 20 21:34:03.565856 containerd[1513]: time="2025-03-20T21:34:03.565708882Z" level=info msg="shim disconnected" id=47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf namespace=k8s.io Mar 20 21:34:03.565856 containerd[1513]: time="2025-03-20T21:34:03.565760771Z" level=warning msg="cleaning up after shim disconnected" id=47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf namespace=k8s.io Mar 20 21:34:03.565856 containerd[1513]: time="2025-03-20T21:34:03.565772835Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:34:03.578659 containerd[1513]: time="2025-03-20T21:34:03.578600025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" id:\"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" pid:2785 exit_status:137 exited_at:{seconds:1742506443 nanos:539789494}" Mar 20 21:34:03.581087 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d-shm.mount: Deactivated successfully. 
Mar 20 21:34:03.590737 containerd[1513]: time="2025-03-20T21:34:03.590657145Z" level=info msg="TearDown network for sandbox \"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\" successfully"
Mar 20 21:34:03.590737 containerd[1513]: time="2025-03-20T21:34:03.590725315Z" level=info msg="StopPodSandbox for \"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\" returns successfully"
Mar 20 21:34:03.592908 containerd[1513]: time="2025-03-20T21:34:03.592880175Z" level=info msg="TearDown network for sandbox \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" successfully"
Mar 20 21:34:03.593060 containerd[1513]: time="2025-03-20T21:34:03.592988653Z" level=info msg="StopPodSandbox for \"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" returns successfully"
Mar 20 21:34:03.599015 containerd[1513]: time="2025-03-20T21:34:03.598973781Z" level=info msg="received exit event sandbox_id:\"47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf\" exit_status:137 exited_at:{seconds:1742506443 nanos:539789494}"
Mar 20 21:34:03.599319 containerd[1513]: time="2025-03-20T21:34:03.599197911Z" level=info msg="received exit event sandbox_id:\"738d5bf8898dcc68a6dc409865b50c57a710292440bc0bcc679848f32347957d\" exit_status:137 exited_at:{seconds:1742506443 nanos:520335694}"
Mar 20 21:34:03.655632 kubelet[2628]: I0320 21:34:03.655578 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-hostproc\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.655632 kubelet[2628]: I0320 21:34:03.655620 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-cgroup\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656149 kubelet[2628]: I0320 21:34:03.655674 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-host-proc-sys-net\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656149 kubelet[2628]: I0320 21:34:03.655689 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-bpf-maps\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656149 kubelet[2628]: I0320 21:34:03.655704 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-run\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656149 kubelet[2628]: I0320 21:34:03.655723 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-config-path\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656149 kubelet[2628]: I0320 21:34:03.655743 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-host-proc-sys-kernel\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656149 kubelet[2628]: I0320 21:34:03.655742 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 21:34:03.656299 kubelet[2628]: I0320 21:34:03.655783 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvl55\" (UniqueName: \"kubernetes.io/projected/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-kube-api-access-fvl55\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656299 kubelet[2628]: I0320 21:34:03.655802 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 21:34:03.656299 kubelet[2628]: I0320 21:34:03.655806 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-etc-cni-netd\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656299 kubelet[2628]: I0320 21:34:03.655779 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-hostproc" (OuterVolumeSpecName: "hostproc") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 21:34:03.656299 kubelet[2628]: I0320 21:34:03.655829 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 21:34:03.656444 kubelet[2628]: I0320 21:34:03.655818 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 21:34:03.656444 kubelet[2628]: I0320 21:34:03.655851 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 21:34:03.656444 kubelet[2628]: I0320 21:34:03.655823 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nks5\" (UniqueName: \"kubernetes.io/projected/02d67544-dbf2-4617-9c8c-6762cdf49abc-kube-api-access-9nks5\") pod \"02d67544-dbf2-4617-9c8c-6762cdf49abc\" (UID: \"02d67544-dbf2-4617-9c8c-6762cdf49abc\") "
Mar 20 21:34:03.656444 kubelet[2628]: I0320 21:34:03.655916 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02d67544-dbf2-4617-9c8c-6762cdf49abc-cilium-config-path\") pod \"02d67544-dbf2-4617-9c8c-6762cdf49abc\" (UID: \"02d67544-dbf2-4617-9c8c-6762cdf49abc\") "
Mar 20 21:34:03.656444 kubelet[2628]: I0320 21:34:03.655944 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-lib-modules\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656680 kubelet[2628]: I0320 21:34:03.655972 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-hubble-tls\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656680 kubelet[2628]: I0320 21:34:03.655995 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-clustermesh-secrets\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") "
Mar 20 21:34:03.656680 kubelet[2628]: I0320 21:34:03.656014 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName:
\"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-xtables-lock\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " Mar 20 21:34:03.656680 kubelet[2628]: I0320 21:34:03.656030 2628 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cni-path\") pod \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\" (UID: \"69fb0054-3d30-4a8c-9c34-ae18758ba7e7\") " Mar 20 21:34:03.656680 kubelet[2628]: I0320 21:34:03.656096 2628 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.656680 kubelet[2628]: I0320 21:34:03.656108 2628 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.656680 kubelet[2628]: I0320 21:34:03.656120 2628 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.658843 kubelet[2628]: I0320 21:34:03.656134 2628 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.658843 kubelet[2628]: I0320 21:34:03.656147 2628 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.658843 kubelet[2628]: I0320 21:34:03.656158 2628 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.658843 kubelet[2628]: I0320 21:34:03.656186 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cni-path" (OuterVolumeSpecName: "cni-path") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 21:34:03.658843 kubelet[2628]: I0320 21:34:03.657045 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 21:34:03.659671 kubelet[2628]: I0320 21:34:03.659651 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 21:34:03.660107 kubelet[2628]: I0320 21:34:03.660071 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d67544-dbf2-4617-9c8c-6762cdf49abc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02d67544-dbf2-4617-9c8c-6762cdf49abc" (UID: "02d67544-dbf2-4617-9c8c-6762cdf49abc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 21:34:03.660300 kubelet[2628]: I0320 21:34:03.660173 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 21:34:03.660360 kubelet[2628]: I0320 21:34:03.660194 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 21:34:03.660647 kubelet[2628]: I0320 21:34:03.660609 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d67544-dbf2-4617-9c8c-6762cdf49abc-kube-api-access-9nks5" (OuterVolumeSpecName: "kube-api-access-9nks5") pod "02d67544-dbf2-4617-9c8c-6762cdf49abc" (UID: "02d67544-dbf2-4617-9c8c-6762cdf49abc"). InnerVolumeSpecName "kube-api-access-9nks5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 21:34:03.660765 kubelet[2628]: I0320 21:34:03.660686 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-kube-api-access-fvl55" (OuterVolumeSpecName: "kube-api-access-fvl55") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "kube-api-access-fvl55". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 21:34:03.661192 kubelet[2628]: I0320 21:34:03.661165 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 21:34:03.662154 kubelet[2628]: I0320 21:34:03.662125 2628 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69fb0054-3d30-4a8c-9c34-ae18758ba7e7" (UID: "69fb0054-3d30-4a8c-9c34-ae18758ba7e7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 21:34:03.710372 kubelet[2628]: I0320 21:34:03.710078 2628 scope.go:117] "RemoveContainer" containerID="7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c" Mar 20 21:34:03.713842 containerd[1513]: time="2025-03-20T21:34:03.713801752Z" level=info msg="RemoveContainer for \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\"" Mar 20 21:34:03.715963 systemd[1]: Removed slice kubepods-besteffort-pod02d67544_dbf2_4617_9c8c_6762cdf49abc.slice - libcontainer container kubepods-besteffort-pod02d67544_dbf2_4617_9c8c_6762cdf49abc.slice. 
Mar 20 21:34:03.721938 containerd[1513]: time="2025-03-20T21:34:03.721903736Z" level=info msg="RemoveContainer for \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" returns successfully" Mar 20 21:34:03.722152 kubelet[2628]: I0320 21:34:03.722118 2628 scope.go:117] "RemoveContainer" containerID="7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c" Mar 20 21:34:03.722361 containerd[1513]: time="2025-03-20T21:34:03.722326969Z" level=error msg="ContainerStatus for \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\": not found" Mar 20 21:34:03.723120 systemd[1]: Removed slice kubepods-burstable-pod69fb0054_3d30_4a8c_9c34_ae18758ba7e7.slice - libcontainer container kubepods-burstable-pod69fb0054_3d30_4a8c_9c34_ae18758ba7e7.slice. Mar 20 21:34:03.723230 systemd[1]: kubepods-burstable-pod69fb0054_3d30_4a8c_9c34_ae18758ba7e7.slice: Consumed 7.192s CPU time, 126.1M memory peak, 344K read from disk, 13.3M written to disk. 
Mar 20 21:34:03.724972 kubelet[2628]: E0320 21:34:03.724941 2628 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\": not found" containerID="7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c" Mar 20 21:34:03.725061 kubelet[2628]: I0320 21:34:03.724978 2628 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c"} err="failed to get container status \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7303f7f269a8a51791260920e3176a653e9bd77bad80bd63ac6565bc8a9fdb6c\": not found" Mar 20 21:34:03.725092 kubelet[2628]: I0320 21:34:03.725064 2628 scope.go:117] "RemoveContainer" containerID="050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6" Mar 20 21:34:03.726980 containerd[1513]: time="2025-03-20T21:34:03.726923939Z" level=info msg="RemoveContainer for \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\"" Mar 20 21:34:03.732318 containerd[1513]: time="2025-03-20T21:34:03.732277443Z" level=info msg="RemoveContainer for \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" returns successfully" Mar 20 21:34:03.732770 kubelet[2628]: I0320 21:34:03.732651 2628 scope.go:117] "RemoveContainer" containerID="e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698" Mar 20 21:34:03.736309 containerd[1513]: time="2025-03-20T21:34:03.736261797Z" level=info msg="RemoveContainer for \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\"" Mar 20 21:34:03.755820 containerd[1513]: time="2025-03-20T21:34:03.755660550Z" level=info msg="RemoveContainer for \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\" returns successfully" 
Mar 20 21:34:03.756025 kubelet[2628]: I0320 21:34:03.755997 2628 scope.go:117] "RemoveContainer" containerID="986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608" Mar 20 21:34:03.756415 kubelet[2628]: I0320 21:34:03.756317 2628 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756415 kubelet[2628]: I0320 21:34:03.756334 2628 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756415 kubelet[2628]: I0320 21:34:03.756343 2628 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756415 kubelet[2628]: I0320 21:34:03.756353 2628 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756415 kubelet[2628]: I0320 21:34:03.756361 2628 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fvl55\" (UniqueName: \"kubernetes.io/projected/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-kube-api-access-fvl55\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756415 kubelet[2628]: I0320 21:34:03.756369 2628 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756415 kubelet[2628]: I0320 21:34:03.756377 2628 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9nks5\" (UniqueName: 
\"kubernetes.io/projected/02d67544-dbf2-4617-9c8c-6762cdf49abc-kube-api-access-9nks5\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756415 kubelet[2628]: I0320 21:34:03.756385 2628 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02d67544-dbf2-4617-9c8c-6762cdf49abc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756616 kubelet[2628]: I0320 21:34:03.756392 2628 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.756616 kubelet[2628]: I0320 21:34:03.756400 2628 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69fb0054-3d30-4a8c-9c34-ae18758ba7e7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 20 21:34:03.763761 containerd[1513]: time="2025-03-20T21:34:03.763542390Z" level=info msg="RemoveContainer for \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\"" Mar 20 21:34:03.768425 containerd[1513]: time="2025-03-20T21:34:03.768383540Z" level=info msg="RemoveContainer for \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\" returns successfully" Mar 20 21:34:03.768598 kubelet[2628]: I0320 21:34:03.768574 2628 scope.go:117] "RemoveContainer" containerID="33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d" Mar 20 21:34:03.769797 containerd[1513]: time="2025-03-20T21:34:03.769775954Z" level=info msg="RemoveContainer for \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\"" Mar 20 21:34:03.773435 containerd[1513]: time="2025-03-20T21:34:03.773412830Z" level=info msg="RemoveContainer for \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\" returns successfully" Mar 20 21:34:03.773596 kubelet[2628]: I0320 21:34:03.773572 2628 scope.go:117] "RemoveContainer" 
containerID="eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550" Mar 20 21:34:03.774965 containerd[1513]: time="2025-03-20T21:34:03.774907902Z" level=info msg="RemoveContainer for \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\"" Mar 20 21:34:03.778467 containerd[1513]: time="2025-03-20T21:34:03.778442702Z" level=info msg="RemoveContainer for \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\" returns successfully" Mar 20 21:34:03.778678 kubelet[2628]: I0320 21:34:03.778661 2628 scope.go:117] "RemoveContainer" containerID="050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6" Mar 20 21:34:03.778836 containerd[1513]: time="2025-03-20T21:34:03.778804027Z" level=error msg="ContainerStatus for \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\": not found" Mar 20 21:34:03.778927 kubelet[2628]: E0320 21:34:03.778903 2628 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\": not found" containerID="050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6" Mar 20 21:34:03.778960 kubelet[2628]: I0320 21:34:03.778933 2628 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6"} err="failed to get container status \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"050210c5daf07f7250edf823d3d001d43b3b7cb9243f20dbfb2f72da2079f5b6\": not found" Mar 20 21:34:03.778960 kubelet[2628]: I0320 21:34:03.778954 2628 scope.go:117] "RemoveContainer" 
containerID="e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698" Mar 20 21:34:03.779122 containerd[1513]: time="2025-03-20T21:34:03.779068905Z" level=error msg="ContainerStatus for \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\": not found" Mar 20 21:34:03.779308 kubelet[2628]: E0320 21:34:03.779233 2628 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\": not found" containerID="e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698" Mar 20 21:34:03.779308 kubelet[2628]: I0320 21:34:03.779254 2628 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698"} err="failed to get container status \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3c1ca081cb79c90582210d83a7c8a7117756ef8d19268a3f31b2c99f016a698\": not found" Mar 20 21:34:03.779308 kubelet[2628]: I0320 21:34:03.779270 2628 scope.go:117] "RemoveContainer" containerID="986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608" Mar 20 21:34:03.779739 containerd[1513]: time="2025-03-20T21:34:03.779674058Z" level=error msg="ContainerStatus for \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\": not found" Mar 20 21:34:03.779915 kubelet[2628]: E0320 21:34:03.779873 2628 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\": not found" containerID="986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608" Mar 20 21:34:03.779971 kubelet[2628]: I0320 21:34:03.779917 2628 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608"} err="failed to get container status \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\": rpc error: code = NotFound desc = an error occurred when try to find container \"986c0a2e52f8a8533e47bebeafae145e0e175fcb5ab4c28f1872679546c74608\": not found" Mar 20 21:34:03.779971 kubelet[2628]: I0320 21:34:03.779950 2628 scope.go:117] "RemoveContainer" containerID="33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d" Mar 20 21:34:03.780324 containerd[1513]: time="2025-03-20T21:34:03.780249292Z" level=error msg="ContainerStatus for \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\": not found" Mar 20 21:34:03.780457 kubelet[2628]: E0320 21:34:03.780431 2628 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\": not found" containerID="33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d" Mar 20 21:34:03.780503 kubelet[2628]: I0320 21:34:03.780457 2628 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d"} err="failed to get container status \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"33f004f2a93b6049af672f8b3f07cd9f849013632253c3c9eaff3a2d1b68980d\": not found" Mar 20 21:34:03.780535 kubelet[2628]: I0320 21:34:03.780502 2628 scope.go:117] "RemoveContainer" containerID="eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550" Mar 20 21:34:03.780823 containerd[1513]: time="2025-03-20T21:34:03.780770213Z" level=error msg="ContainerStatus for \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\": not found" Mar 20 21:34:03.780876 kubelet[2628]: E0320 21:34:03.780857 2628 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\": not found" containerID="eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550" Mar 20 21:34:03.780914 kubelet[2628]: I0320 21:34:03.780879 2628 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550"} err="failed to get container status \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb4a153b4d585faa848925d802366d7817cc22f1ec7d27d8e56efa166f8a4550\": not found" Mar 20 21:34:04.474893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47059a8c7f967a351923212d336fdb3ec5ce5846953b092c060943393fa5a5bf-shm.mount: Deactivated successfully. Mar 20 21:34:04.475005 systemd[1]: var-lib-kubelet-pods-02d67544\x2ddbf2\x2d4617\x2d9c8c\x2d6762cdf49abc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9nks5.mount: Deactivated successfully. 
Mar 20 21:34:04.475085 systemd[1]: var-lib-kubelet-pods-69fb0054\x2d3d30\x2d4a8c\x2d9c34\x2dae18758ba7e7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 20 21:34:04.475171 systemd[1]: var-lib-kubelet-pods-69fb0054\x2d3d30\x2d4a8c\x2d9c34\x2dae18758ba7e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvl55.mount: Deactivated successfully. Mar 20 21:34:04.475264 systemd[1]: var-lib-kubelet-pods-69fb0054\x2d3d30\x2d4a8c\x2d9c34\x2dae18758ba7e7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 20 21:34:04.502175 kubelet[2628]: I0320 21:34:04.502126 2628 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d67544-dbf2-4617-9c8c-6762cdf49abc" path="/var/lib/kubelet/pods/02d67544-dbf2-4617-9c8c-6762cdf49abc/volumes" Mar 20 21:34:04.502772 kubelet[2628]: I0320 21:34:04.502738 2628 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69fb0054-3d30-4a8c-9c34-ae18758ba7e7" path="/var/lib/kubelet/pods/69fb0054-3d30-4a8c-9c34-ae18758ba7e7/volumes" Mar 20 21:34:04.568209 kubelet[2628]: E0320 21:34:04.568155 2628 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 21:34:04.803448 containerd[1513]: time="2025-03-20T21:34:04.803305899Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1742506443 nanos:520335694}" Mar 20 21:34:05.385589 sshd[4245]: Connection closed by 10.0.0.1 port 52218 Mar 20 21:34:05.386111 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Mar 20 21:34:05.401490 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:52218.service: Deactivated successfully. Mar 20 21:34:05.403660 systemd[1]: session-25.scope: Deactivated successfully. Mar 20 21:34:05.405374 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit. 
Mar 20 21:34:05.406715 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:52226.service - OpenSSH per-connection server daemon (10.0.0.1:52226). Mar 20 21:34:05.407588 systemd-logind[1496]: Removed session 25. Mar 20 21:34:05.459432 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 52226 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:34:05.460873 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:34:05.465253 systemd-logind[1496]: New session 26 of user core. Mar 20 21:34:05.474772 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 20 21:34:06.011683 sshd[4400]: Connection closed by 10.0.0.1 port 52226 Mar 20 21:34:06.011692 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Mar 20 21:34:06.019930 kubelet[2628]: I0320 21:34:06.019880 2628 memory_manager.go:355] "RemoveStaleState removing state" podUID="69fb0054-3d30-4a8c-9c34-ae18758ba7e7" containerName="cilium-agent" Mar 20 21:34:06.019930 kubelet[2628]: I0320 21:34:06.019914 2628 memory_manager.go:355] "RemoveStaleState removing state" podUID="02d67544-dbf2-4617-9c8c-6762cdf49abc" containerName="cilium-operator" Mar 20 21:34:06.029663 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:52226.service: Deactivated successfully. Mar 20 21:34:06.034507 systemd[1]: session-26.scope: Deactivated successfully. Mar 20 21:34:06.038618 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit. Mar 20 21:34:06.045983 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:52242.service - OpenSSH per-connection server daemon (10.0.0.1:52242). Mar 20 21:34:06.047414 systemd-logind[1496]: Removed session 26. Mar 20 21:34:06.057016 systemd[1]: Created slice kubepods-burstable-pode643c136_d256_4789_8dd6_aeac20459071.slice - libcontainer container kubepods-burstable-pode643c136_d256_4789_8dd6_aeac20459071.slice. 
Mar 20 21:34:06.070143 kubelet[2628]: I0320 21:34:06.070109 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e643c136-d256-4789-8dd6-aeac20459071-cilium-config-path\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070600 kubelet[2628]: I0320 21:34:06.070310 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-cni-path\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070600 kubelet[2628]: I0320 21:34:06.070338 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-host-proc-sys-net\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070600 kubelet[2628]: I0320 21:34:06.070356 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-bpf-maps\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070600 kubelet[2628]: I0320 21:34:06.070372 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-cilium-cgroup\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070600 kubelet[2628]: I0320 21:34:06.070388 2628 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e643c136-d256-4789-8dd6-aeac20459071-cilium-ipsec-secrets\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070600 kubelet[2628]: I0320 21:34:06.070406 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-cilium-run\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070771 kubelet[2628]: I0320 21:34:06.070423 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-hostproc\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070771 kubelet[2628]: I0320 21:34:06.070438 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-xtables-lock\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070771 kubelet[2628]: I0320 21:34:06.070451 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e643c136-d256-4789-8dd6-aeac20459071-clustermesh-secrets\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070771 kubelet[2628]: I0320 21:34:06.070465 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/e643c136-d256-4789-8dd6-aeac20459071-hubble-tls\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070771 kubelet[2628]: I0320 21:34:06.070481 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-etc-cni-netd\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070771 kubelet[2628]: I0320 21:34:06.070493 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-lib-modules\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070975 kubelet[2628]: I0320 21:34:06.070506 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e643c136-d256-4789-8dd6-aeac20459071-host-proc-sys-kernel\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.070975 kubelet[2628]: I0320 21:34:06.070520 2628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bz4\" (UniqueName: \"kubernetes.io/projected/e643c136-d256-4789-8dd6-aeac20459071-kube-api-access-v4bz4\") pod \"cilium-5kb9f\" (UID: \"e643c136-d256-4789-8dd6-aeac20459071\") " pod="kube-system/cilium-5kb9f" Mar 20 21:34:06.096699 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 52242 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:34:06.098088 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:34:06.102035 
systemd-logind[1496]: New session 27 of user core. Mar 20 21:34:06.112769 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 20 21:34:06.162389 sshd[4414]: Connection closed by 10.0.0.1 port 52242 Mar 20 21:34:06.162692 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Mar 20 21:34:06.173961 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:52242.service: Deactivated successfully. Mar 20 21:34:06.188724 systemd[1]: session-27.scope: Deactivated successfully. Mar 20 21:34:06.190246 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit. Mar 20 21:34:06.191556 systemd[1]: Started sshd@27-10.0.0.134:22-10.0.0.1:52250.service - OpenSSH per-connection server daemon (10.0.0.1:52250). Mar 20 21:34:06.192547 systemd-logind[1496]: Removed session 27. Mar 20 21:34:06.193182 kubelet[2628]: I0320 21:34:06.193145 2628 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-20T21:34:06Z","lastTransitionTime":"2025-03-20T21:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 20 21:34:06.242897 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 52250 ssh2: RSA SHA256:VTq3PGBWdFOdqOE94J+KuRtq48vMTKbY2+SdwJo+5wc Mar 20 21:34:06.244371 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:34:06.248708 systemd-logind[1496]: New session 28 of user core. Mar 20 21:34:06.253773 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 20 21:34:06.361142 kubelet[2628]: E0320 21:34:06.361108 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:06.361685 containerd[1513]: time="2025-03-20T21:34:06.361614778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5kb9f,Uid:e643c136-d256-4789-8dd6-aeac20459071,Namespace:kube-system,Attempt:0,}"
Mar 20 21:34:06.381039 containerd[1513]: time="2025-03-20T21:34:06.380992763Z" level=info msg="connecting to shim 50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb" address="unix:///run/containerd/s/16cf63d2e0184498f983d3ebae4f6bd983bd116899bf33ec3b35a534341826f4" namespace=k8s.io protocol=ttrpc version=3
Mar 20 21:34:06.403964 systemd[1]: Started cri-containerd-50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb.scope - libcontainer container 50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb.
Mar 20 21:34:06.428821 containerd[1513]: time="2025-03-20T21:34:06.428777974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5kb9f,Uid:e643c136-d256-4789-8dd6-aeac20459071,Namespace:kube-system,Attempt:0,} returns sandbox id \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\""
Mar 20 21:34:06.429366 kubelet[2628]: E0320 21:34:06.429334 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:06.431572 containerd[1513]: time="2025-03-20T21:34:06.431189578Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 20 21:34:06.440323 containerd[1513]: time="2025-03-20T21:34:06.440279136Z" level=info msg="Container e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:34:06.457264 containerd[1513]: time="2025-03-20T21:34:06.457218626Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44\""
Mar 20 21:34:06.457712 containerd[1513]: time="2025-03-20T21:34:06.457676394Z" level=info msg="StartContainer for \"e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44\""
Mar 20 21:34:06.458424 containerd[1513]: time="2025-03-20T21:34:06.458400502Z" level=info msg="connecting to shim e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44" address="unix:///run/containerd/s/16cf63d2e0184498f983d3ebae4f6bd983bd116899bf33ec3b35a534341826f4" protocol=ttrpc version=3
Mar 20 21:34:06.481773 systemd[1]: Started cri-containerd-e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44.scope - libcontainer container e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44.
Mar 20 21:34:06.510981 containerd[1513]: time="2025-03-20T21:34:06.510936563Z" level=info msg="StartContainer for \"e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44\" returns successfully"
Mar 20 21:34:06.519232 systemd[1]: cri-containerd-e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44.scope: Deactivated successfully.
Mar 20 21:34:06.519847 containerd[1513]: time="2025-03-20T21:34:06.519756164Z" level=info msg="received exit event container_id:\"e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44\" id:\"e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44\" pid:4491 exited_at:{seconds:1742506446 nanos:519397517}"
Mar 20 21:34:06.519903 containerd[1513]: time="2025-03-20T21:34:06.519888057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44\" id:\"e6ee526179ca7a51c844b4a0620cbe4956db77f55455510b0f0c2b938f3caf44\" pid:4491 exited_at:{seconds:1742506446 nanos:519397517}"
Mar 20 21:34:06.723021 kubelet[2628]: E0320 21:34:06.722899 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:06.725375 containerd[1513]: time="2025-03-20T21:34:06.724450771Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 20 21:34:06.732023 containerd[1513]: time="2025-03-20T21:34:06.731973594Z" level=info msg="Container 3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:34:06.738600 containerd[1513]: time="2025-03-20T21:34:06.738558941Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5\""
Mar 20 21:34:06.739119 containerd[1513]: time="2025-03-20T21:34:06.739072055Z" level=info msg="StartContainer for \"3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5\""
Mar 20 21:34:06.740052 containerd[1513]: time="2025-03-20T21:34:06.739999434Z" level=info msg="connecting to shim 3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5" address="unix:///run/containerd/s/16cf63d2e0184498f983d3ebae4f6bd983bd116899bf33ec3b35a534341826f4" protocol=ttrpc version=3
Mar 20 21:34:06.760783 systemd[1]: Started cri-containerd-3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5.scope - libcontainer container 3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5.
Mar 20 21:34:06.790188 containerd[1513]: time="2025-03-20T21:34:06.790146565Z" level=info msg="StartContainer for \"3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5\" returns successfully"
Mar 20 21:34:06.795720 systemd[1]: cri-containerd-3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5.scope: Deactivated successfully.
Mar 20 21:34:06.796419 containerd[1513]: time="2025-03-20T21:34:06.796384154Z" level=info msg="received exit event container_id:\"3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5\" id:\"3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5\" pid:4536 exited_at:{seconds:1742506446 nanos:796183530}"
Mar 20 21:34:06.796671 containerd[1513]: time="2025-03-20T21:34:06.796421946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5\" id:\"3b63abea04db7d9c1ff2bd6c2864f7759142f3f29c65fbdeb2a418499dd69ca5\" pid:4536 exited_at:{seconds:1742506446 nanos:796183530}"
Mar 20 21:34:07.726624 kubelet[2628]: E0320 21:34:07.726582 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:07.728281 containerd[1513]: time="2025-03-20T21:34:07.728215497Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 20 21:34:07.738132 containerd[1513]: time="2025-03-20T21:34:07.738075902Z" level=info msg="Container a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:34:07.751555 containerd[1513]: time="2025-03-20T21:34:07.751497201Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4\""
Mar 20 21:34:07.752115 containerd[1513]: time="2025-03-20T21:34:07.752065250Z" level=info msg="StartContainer for \"a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4\""
Mar 20 21:34:07.753363 containerd[1513]: time="2025-03-20T21:34:07.753336545Z" level=info msg="connecting to shim a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4" address="unix:///run/containerd/s/16cf63d2e0184498f983d3ebae4f6bd983bd116899bf33ec3b35a534341826f4" protocol=ttrpc version=3
Mar 20 21:34:07.775779 systemd[1]: Started cri-containerd-a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4.scope - libcontainer container a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4.
Mar 20 21:34:07.816543 systemd[1]: cri-containerd-a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4.scope: Deactivated successfully.
Mar 20 21:34:07.817480 containerd[1513]: time="2025-03-20T21:34:07.817448541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4\" id:\"a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4\" pid:4580 exited_at:{seconds:1742506447 nanos:817087029}"
Mar 20 21:34:07.817593 containerd[1513]: time="2025-03-20T21:34:07.817452249Z" level=info msg="received exit event container_id:\"a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4\" id:\"a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4\" pid:4580 exited_at:{seconds:1742506447 nanos:817087029}"
Mar 20 21:34:07.817783 containerd[1513]: time="2025-03-20T21:34:07.817625962Z" level=info msg="StartContainer for \"a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4\" returns successfully"
Mar 20 21:34:07.840226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a90bb240c343a9345c66422635e1efbd27d9f30d590a449a55fdeca09898a6e4-rootfs.mount: Deactivated successfully.
Mar 20 21:34:08.503664 kubelet[2628]: E0320 21:34:08.502143 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:08.731129 kubelet[2628]: E0320 21:34:08.731096 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:08.732883 containerd[1513]: time="2025-03-20T21:34:08.732837591Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 20 21:34:08.740967 containerd[1513]: time="2025-03-20T21:34:08.740919999Z" level=info msg="Container 6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:34:08.754376 containerd[1513]: time="2025-03-20T21:34:08.754262464Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140\""
Mar 20 21:34:08.754879 containerd[1513]: time="2025-03-20T21:34:08.754851432Z" level=info msg="StartContainer for \"6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140\""
Mar 20 21:34:08.755662 containerd[1513]: time="2025-03-20T21:34:08.755626856Z" level=info msg="connecting to shim 6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140" address="unix:///run/containerd/s/16cf63d2e0184498f983d3ebae4f6bd983bd116899bf33ec3b35a534341826f4" protocol=ttrpc version=3
Mar 20 21:34:08.779765 systemd[1]: Started cri-containerd-6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140.scope - libcontainer container 6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140.
Mar 20 21:34:08.804987 systemd[1]: cri-containerd-6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140.scope: Deactivated successfully.
Mar 20 21:34:08.805568 containerd[1513]: time="2025-03-20T21:34:08.805524371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140\" id:\"6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140\" pid:4619 exited_at:{seconds:1742506448 nanos:805247330}"
Mar 20 21:34:08.807210 containerd[1513]: time="2025-03-20T21:34:08.807184420Z" level=info msg="received exit event container_id:\"6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140\" id:\"6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140\" pid:4619 exited_at:{seconds:1742506448 nanos:805247330}"
Mar 20 21:34:08.814255 containerd[1513]: time="2025-03-20T21:34:08.814230224Z" level=info msg="StartContainer for \"6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140\" returns successfully"
Mar 20 21:34:08.826512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6875b3362a24dcfc7637dcd6af513759f6329b971cb86fcb2401f7c02c4ab140-rootfs.mount: Deactivated successfully.
Mar 20 21:34:09.569165 kubelet[2628]: E0320 21:34:09.569119 2628 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 20 21:34:09.735998 kubelet[2628]: E0320 21:34:09.735959 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:09.737448 containerd[1513]: time="2025-03-20T21:34:09.737399296Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 21:34:09.748269 containerd[1513]: time="2025-03-20T21:34:09.748223345Z" level=info msg="Container b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:34:09.756995 containerd[1513]: time="2025-03-20T21:34:09.756945340Z" level=info msg="CreateContainer within sandbox \"50ae6baedf6e9950478b3bb51102857ff7239e1b290288da4a9509ef08e5adbb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\""
Mar 20 21:34:09.757467 containerd[1513]: time="2025-03-20T21:34:09.757438454Z" level=info msg="StartContainer for \"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\""
Mar 20 21:34:09.758312 containerd[1513]: time="2025-03-20T21:34:09.758291528Z" level=info msg="connecting to shim b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169" address="unix:///run/containerd/s/16cf63d2e0184498f983d3ebae4f6bd983bd116899bf33ec3b35a534341826f4" protocol=ttrpc version=3
Mar 20 21:34:09.779782 systemd[1]: Started cri-containerd-b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169.scope - libcontainer container b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169.
Mar 20 21:34:09.815439 containerd[1513]: time="2025-03-20T21:34:09.815389472Z" level=info msg="StartContainer for \"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\" returns successfully"
Mar 20 21:34:09.882101 containerd[1513]: time="2025-03-20T21:34:09.882049361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\" id:\"497b295a0e0fff2bd396f3503f1a669bc51814490a01f421ee5fb81385110148\" pid:4688 exited_at:{seconds:1742506449 nanos:881489189}"
Mar 20 21:34:10.219686 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 20 21:34:10.741622 kubelet[2628]: E0320 21:34:10.741586 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:10.754350 kubelet[2628]: I0320 21:34:10.754274 2628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5kb9f" podStartSLOduration=4.754252275 podStartE2EDuration="4.754252275s" podCreationTimestamp="2025-03-20 21:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:34:10.754154728 +0000 UTC m=+86.331829949" watchObservedRunningTime="2025-03-20 21:34:10.754252275 +0000 UTC m=+86.331927516"
Mar 20 21:34:12.362752 kubelet[2628]: E0320 21:34:12.362714 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:12.526536 containerd[1513]: time="2025-03-20T21:34:12.526488420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\" id:\"687af0b736096f9f04655fa16fd05f2bfb82344b7ca5c96a66053c20e4772925\" pid:5033 exit_status:1 exited_at:{seconds:1742506452 nanos:526152918}"
Mar 20 21:34:13.255631 systemd-networkd[1428]: lxc_health: Link UP
Mar 20 21:34:13.266874 systemd-networkd[1428]: lxc_health: Gained carrier
Mar 20 21:34:14.363603 kubelet[2628]: E0320 21:34:14.363571 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:14.500617 kubelet[2628]: E0320 21:34:14.500573 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:14.628071 containerd[1513]: time="2025-03-20T21:34:14.627923721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\" id:\"22dc6c7c0be44879fdcf635d735f8592a94f6eea9a467fa18612af3696826d59\" pid:5256 exited_at:{seconds:1742506454 nanos:627327774}"
Mar 20 21:34:14.748218 kubelet[2628]: E0320 21:34:14.748182 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:14.893750 systemd-networkd[1428]: lxc_health: Gained IPv6LL
Mar 20 21:34:15.749752 kubelet[2628]: E0320 21:34:15.749711 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:16.726313 containerd[1513]: time="2025-03-20T21:34:16.726215704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\" id:\"c21e1ab1bf80f8f5e974fdeefe42e83484f98c731685ae1ad95da48360e27280\" pid:5291 exited_at:{seconds:1742506456 nanos:725666726}"
Mar 20 21:34:18.501354 kubelet[2628]: E0320 21:34:18.501277 2628 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:34:18.807034 containerd[1513]: time="2025-03-20T21:34:18.806884499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\" id:\"ac33119a1a29c14520db46448697f3cda45c44f27d1d94995aee7f36d2a98d86\" pid:5317 exited_at:{seconds:1742506458 nanos:806358827}"
Mar 20 21:34:20.889250 containerd[1513]: time="2025-03-20T21:34:20.889200440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9e7e9d58a8713f7116abb2cb1e7885e743c5f0f6846705d8ae3d17eb9513169\" id:\"46f344aec8b3650388aa64afaa8ec54f8d14da4db73fd14b32cc8f953ae2e32b\" pid:5341 exited_at:{seconds:1742506460 nanos:888891582}"
Mar 20 21:34:20.903253 sshd[4427]: Connection closed by 10.0.0.1 port 52250
Mar 20 21:34:20.903764 sshd-session[4424]: pam_unix(sshd:session): session closed for user core
Mar 20 21:34:20.907436 systemd[1]: sshd@27-10.0.0.134:22-10.0.0.1:52250.service: Deactivated successfully.
Mar 20 21:34:20.909338 systemd[1]: session-28.scope: Deactivated successfully.
Mar 20 21:34:20.909990 systemd-logind[1496]: Session 28 logged out. Waiting for processes to exit.
Mar 20 21:34:20.910897 systemd-logind[1496]: Removed session 28.