May 13 23:47:18.909742 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 13 23:47:18.909772 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:47:18.909782 kernel: BIOS-provided physical RAM map:
May 13 23:47:18.909788 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
May 13 23:47:18.909795 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 13 23:47:18.909804 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
May 13 23:47:18.909812 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 13 23:47:18.909819 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
May 13 23:47:18.909825 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 13 23:47:18.909832 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 13 23:47:18.909839 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 13 23:47:18.909846 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 13 23:47:18.909852 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 13 23:47:18.909859 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 13 23:47:18.909870 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 13 23:47:18.909877 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 13 23:47:18.909884 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 23:47:18.909891 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 23:47:18.909898 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 23:47:18.909917 kernel: NX (Execute Disable) protection: active
May 13 23:47:18.909925 kernel: APIC: Static calls initialized
May 13 23:47:18.909932 kernel: e820: update [mem 0x9a186018-0x9a18fc57] usable ==> usable
May 13 23:47:18.909940 kernel: e820: update [mem 0x9a186018-0x9a18fc57] usable ==> usable
May 13 23:47:18.909947 kernel: e820: update [mem 0x9a149018-0x9a185e57] usable ==> usable
May 13 23:47:18.909954 kernel: e820: update [mem 0x9a149018-0x9a185e57] usable ==> usable
May 13 23:47:18.909960 kernel: extended physical RAM map:
May 13 23:47:18.909968 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
May 13 23:47:18.909975 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 13 23:47:18.909982 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
May 13 23:47:18.909989 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 13 23:47:18.909999 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a149017] usable
May 13 23:47:18.910006 kernel: reserve setup_data: [mem 0x000000009a149018-0x000000009a185e57] usable
May 13 23:47:18.910013 kernel: reserve setup_data: [mem 0x000000009a185e58-0x000000009a186017] usable
May 13 23:47:18.910020 kernel: reserve setup_data: [mem 0x000000009a186018-0x000000009a18fc57] usable
May 13 23:47:18.910027 kernel: reserve setup_data: [mem 0x000000009a18fc58-0x000000009b8ecfff] usable
May 13 23:47:18.910034 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 13 23:47:18.910041 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 13 23:47:18.910048 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 13 23:47:18.910055 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 13 23:47:18.910063 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 13 23:47:18.910076 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 13 23:47:18.910083 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 13 23:47:18.910090 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 13 23:47:18.910098 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 23:47:18.910105 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 23:47:18.910113 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 23:47:18.910122 kernel: efi: EFI v2.7 by EDK II
May 13 23:47:18.910130 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018
May 13 23:47:18.910138 kernel: random: crng init done
May 13 23:47:18.910145 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 13 23:47:18.910153 kernel: secureboot: Secure boot enabled
May 13 23:47:18.910172 kernel: SMBIOS 2.8 present.
May 13 23:47:18.910179 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 13 23:47:18.910187 kernel: Hypervisor detected: KVM
May 13 23:47:18.910194 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 23:47:18.910202 kernel: kvm-clock: using sched offset of 4304848240 cycles
May 13 23:47:18.910209 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 23:47:18.910220 kernel: tsc: Detected 2794.746 MHz processor
May 13 23:47:18.910228 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 23:47:18.910236 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 23:47:18.910244 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
May 13 23:47:18.910252 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 23:47:18.910259 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 23:47:18.910267 kernel: Using GB pages for direct mapping
May 13 23:47:18.910275 kernel: ACPI: Early table checksum verification disabled
May 13 23:47:18.910282 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
May 13 23:47:18.910292 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 13 23:47:18.910300 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:47:18.910308 kernel: ACPI: DSDT 0x000000009BB7A000 002225 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:47:18.910315 kernel: ACPI: FACS 0x000000009BBDD000 000040
May 13 23:47:18.910323 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:47:18.910331 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:47:18.910339 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:47:18.910346 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:47:18.910354 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 13 23:47:18.910364 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
May 13 23:47:18.910372 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c224]
May 13 23:47:18.910379 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
May 13 23:47:18.910387 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
May 13 23:47:18.910395 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
May 13 23:47:18.910402 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
May 13 23:47:18.910410 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
May 13 23:47:18.910417 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
May 13 23:47:18.910425 kernel: No NUMA configuration found
May 13 23:47:18.910435 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
May 13 23:47:18.910443 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff]
May 13 23:47:18.910450 kernel: Zone ranges:
May 13 23:47:18.910458 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 23:47:18.910466 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
May 13 23:47:18.910473 kernel: Normal empty
May 13 23:47:18.910481 kernel: Movable zone start for each node
May 13 23:47:18.910489 kernel: Early memory node ranges
May 13 23:47:18.910496 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
May 13 23:47:18.910504 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
May 13 23:47:18.910513 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
May 13 23:47:18.910521 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
May 13 23:47:18.910529 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
May 13 23:47:18.910536 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
May 13 23:47:18.910544 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:47:18.910551 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
May 13 23:47:18.910559 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 23:47:18.910567 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 13 23:47:18.910575 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 13 23:47:18.910585 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
May 13 23:47:18.910601 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 23:47:18.910609 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 23:47:18.910625 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 23:47:18.910640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 23:47:18.910656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 23:47:18.910664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 23:47:18.910686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 23:47:18.910702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 23:47:18.910720 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 23:47:18.910736 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 23:47:18.910752 kernel: TSC deadline timer available
May 13 23:47:18.910762 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 23:47:18.910771 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 23:47:18.910780 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 23:47:18.910797 kernel: kvm-guest: setup PV sched yield
May 13 23:47:18.910807 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 13 23:47:18.910815 kernel: Booting paravirtualized kernel on KVM
May 13 23:47:18.910823 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 23:47:18.910831 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 23:47:18.910839 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 23:47:18.910850 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 23:47:18.910857 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 23:47:18.910865 kernel: kvm-guest: PV spinlocks enabled
May 13 23:47:18.910873 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 23:47:18.910882 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:47:18.910891 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:47:18.910899 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:47:18.910907 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:47:18.910918 kernel: Fallback order for Node 0: 0
May 13 23:47:18.910926 kernel: Built 1 zonelists, mobility grouping on. Total pages: 625927
May 13 23:47:18.910934 kernel: Policy zone: DMA32
May 13 23:47:18.910942 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:47:18.910950 kernel: Memory: 2368308K/2552216K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 183652K reserved, 0K cma-reserved)
May 13 23:47:18.910960 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 23:47:18.910968 kernel: ftrace: allocating 37993 entries in 149 pages
May 13 23:47:18.910976 kernel: ftrace: allocated 149 pages with 4 groups
May 13 23:47:18.910984 kernel: Dynamic Preempt: voluntary
May 13 23:47:18.910992 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:47:18.911001 kernel: rcu: RCU event tracing is enabled.
May 13 23:47:18.911009 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 23:47:18.911017 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:47:18.911026 kernel: Rude variant of Tasks RCU enabled.
May 13 23:47:18.911036 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:47:18.911044 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:47:18.911052 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 23:47:18.911059 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 23:47:18.911067 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:47:18.911076 kernel: Console: colour dummy device 80x25
May 13 23:47:18.911084 kernel: printk: console [ttyS0] enabled
May 13 23:47:18.911091 kernel: ACPI: Core revision 20230628
May 13 23:47:18.911100 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 23:47:18.911110 kernel: APIC: Switch to symmetric I/O mode setup
May 13 23:47:18.911118 kernel: x2apic enabled
May 13 23:47:18.911126 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 23:47:18.911134 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 23:47:18.911142 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 23:47:18.911150 kernel: kvm-guest: setup PV IPIs
May 13 23:47:18.911292 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 23:47:18.911302 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 23:47:18.911310 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 13 23:47:18.911321 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 23:47:18.911329 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 23:47:18.911337 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 23:47:18.911345 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 23:47:18.911353 kernel: Spectre V2 : Mitigation: Retpolines
May 13 23:47:18.911361 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 23:47:18.911369 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 23:47:18.911377 kernel: RETBleed: Mitigation: untrained return thunk
May 13 23:47:18.911385 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 23:47:18.911396 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 23:47:18.911404 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 23:47:18.911413 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 23:47:18.911421 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 23:47:18.911429 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 23:47:18.911437 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 23:47:18.911445 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 23:47:18.911452 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 23:47:18.911463 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 23:47:18.911471 kernel: Freeing SMP alternatives memory: 32K
May 13 23:47:18.911479 kernel: pid_max: default: 32768 minimum: 301
May 13 23:47:18.911487 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:47:18.911495 kernel: landlock: Up and running.
May 13 23:47:18.911503 kernel: SELinux: Initializing.
May 13 23:47:18.911511 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:47:18.911519 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:47:18.911527 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 23:47:18.911537 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:47:18.911545 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:47:18.911553 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:47:18.911561 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 23:47:18.911569 kernel: ... version: 0
May 13 23:47:18.911577 kernel: ... bit width: 48
May 13 23:47:18.911585 kernel: ... generic registers: 6
May 13 23:47:18.911593 kernel: ... value mask: 0000ffffffffffff
May 13 23:47:18.911601 kernel: ... max period: 00007fffffffffff
May 13 23:47:18.911611 kernel: ... fixed-purpose events: 0
May 13 23:47:18.911618 kernel: ... event mask: 000000000000003f
May 13 23:47:18.911626 kernel: signal: max sigframe size: 1776
May 13 23:47:18.911634 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:47:18.911642 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:47:18.911650 kernel: smp: Bringing up secondary CPUs ...
May 13 23:47:18.911658 kernel: smpboot: x86: Booting SMP configuration:
May 13 23:47:18.911666 kernel: .... node #0, CPUs: #1 #2 #3
May 13 23:47:18.911674 kernel: smp: Brought up 1 node, 4 CPUs
May 13 23:47:18.911681 kernel: smpboot: Max logical packages: 1
May 13 23:47:18.911692 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 13 23:47:18.911700 kernel: devtmpfs: initialized
May 13 23:47:18.911708 kernel: x86/mm: Memory block size: 128MB
May 13 23:47:18.911716 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
May 13 23:47:18.911724 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
May 13 23:47:18.911732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:47:18.911740 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 23:47:18.911756 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:47:18.911769 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:47:18.911777 kernel: audit: initializing netlink subsys (disabled)
May 13 23:47:18.911792 kernel: audit: type=2000 audit(1747180038.781:1): state=initialized audit_enabled=0 res=1
May 13 23:47:18.911811 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:47:18.911826 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 23:47:18.911841 kernel: cpuidle: using governor menu
May 13 23:47:18.911856 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:47:18.911874 kernel: dca service started, version 1.12.1
May 13 23:47:18.911889 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 13 23:47:18.911914 kernel: PCI: Using configuration type 1 for base access
May 13 23:47:18.911929 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 23:47:18.911947 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:47:18.911962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:47:18.911974 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:47:18.911982 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:47:18.911990 kernel: ACPI: Added _OSI(Module Device)
May 13 23:47:18.911998 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:47:18.912006 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:47:18.912016 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:47:18.912024 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:47:18.912032 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 23:47:18.912039 kernel: ACPI: Interpreter enabled
May 13 23:47:18.912047 kernel: ACPI: PM: (supports S0 S5)
May 13 23:47:18.912055 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 23:47:18.912063 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 23:47:18.912071 kernel: PCI: Using E820 reservations for host bridge windows
May 13 23:47:18.912079 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 23:47:18.912090 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:47:18.912334 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:47:18.912476 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 23:47:18.912604 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 23:47:18.912614 kernel: PCI host bridge to bus 0000:00
May 13 23:47:18.912753 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 23:47:18.912873 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 23:47:18.912994 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 23:47:18.913112 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 13 23:47:18.913273 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 13 23:47:18.913430 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 13 23:47:18.913557 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:47:18.913697 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 23:47:18.913847 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 23:47:18.913975 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 13 23:47:18.914103 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 13 23:47:18.914247 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 13 23:47:18.914408 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 13 23:47:18.914536 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 23:47:18.914671 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 23:47:18.914824 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 13 23:47:18.914951 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 13 23:47:18.915077 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 13 23:47:18.915230 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 23:47:18.915384 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 13 23:47:18.915513 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 13 23:47:18.915640 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 13 23:47:18.915791 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 23:47:18.915920 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 13 23:47:18.916046 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 13 23:47:18.916187 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 13 23:47:18.916317 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 13 23:47:18.916452 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 23:47:18.916579 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 23:47:18.916723 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 23:47:18.916859 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 13 23:47:18.916985 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 13 23:47:18.917118 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 23:47:18.917261 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 13 23:47:18.917273 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 23:47:18.917281 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 23:47:18.917293 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 23:47:18.917301 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 23:47:18.917309 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 23:47:18.917317 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 23:47:18.917325 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 23:47:18.917333 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 23:47:18.917341 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 23:47:18.917349 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 23:47:18.917357 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 23:47:18.917367 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 23:47:18.917375 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 23:47:18.917383 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 23:47:18.917391 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 23:47:18.917399 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 23:47:18.917407 kernel: iommu: Default domain type: Translated
May 13 23:47:18.917415 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 23:47:18.917423 kernel: efivars: Registered efivars operations
May 13 23:47:18.917431 kernel: PCI: Using ACPI for IRQ routing
May 13 23:47:18.917441 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 23:47:18.917449 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
May 13 23:47:18.917457 kernel: e820: reserve RAM buffer [mem 0x9a149018-0x9bffffff]
May 13 23:47:18.917464 kernel: e820: reserve RAM buffer [mem 0x9a186018-0x9bffffff]
May 13 23:47:18.917472 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
May 13 23:47:18.917480 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
May 13 23:47:18.917606 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 23:47:18.917732 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 23:47:18.917871 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 23:47:18.917883 kernel: vgaarb: loaded
May 13 23:47:18.917891 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 23:47:18.917899 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 23:47:18.917907 kernel: clocksource: Switched to clocksource kvm-clock
May 13 23:47:18.917915 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:47:18.917923 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:47:18.917932 kernel: pnp: PnP ACPI init
May 13 23:47:18.918074 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 13 23:47:18.918089 kernel: pnp: PnP ACPI: found 6 devices
May 13 23:47:18.918097 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 23:47:18.918105 kernel: NET: Registered PF_INET protocol family
May 13 23:47:18.918113 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:47:18.918121 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:47:18.918129 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:47:18.918137 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:47:18.918145 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:47:18.918208 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:47:18.918217 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:47:18.918226 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:47:18.918234 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:47:18.918242 kernel: NET: Registered PF_XDP protocol family
May 13 23:47:18.918373 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 13 23:47:18.918498 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 13 23:47:18.918616 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 23:47:18.918735 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 23:47:18.918859 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 23:47:18.918972 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 13 23:47:18.919086 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 13 23:47:18.919215 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 13 23:47:18.919226 kernel: PCI: CLS 0 bytes, default 64
May 13 23:47:18.919234 kernel: Initialise system trusted keyrings
May 13 23:47:18.919242 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:47:18.919250 kernel: Key type asymmetric registered
May 13 23:47:18.919262 kernel: Asymmetric key parser 'x509' registered
May 13 23:47:18.919270 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 23:47:18.919278 kernel: io scheduler mq-deadline registered
May 13 23:47:18.919286 kernel: io scheduler kyber registered
May 13 23:47:18.919294 kernel: io scheduler bfq registered
May 13 23:47:18.919302 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 23:47:18.919311 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 23:47:18.919334 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 23:47:18.919344 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 23:47:18.919355 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:47:18.919363 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 23:47:18.919372 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 23:47:18.919380 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 23:47:18.919388 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 23:47:18.919519 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 23:47:18.919534 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 23:47:18.919652 kernel: rtc_cmos 00:04: registered as rtc0
May 13 23:47:18.919783 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T23:47:18 UTC (1747180038)
May 13 23:47:18.919903 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 23:47:18.919914 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 23:47:18.919922 kernel: efifb: probing for efifb
May 13 23:47:18.919931 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 13 23:47:18.919939 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 13 23:47:18.919947 kernel: efifb: scrolling: redraw
May 13 23:47:18.919955 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 23:47:18.919964 kernel: Console: switching to colour frame buffer device 160x50
May 13 23:47:18.919975 kernel: fb0: EFI VGA frame buffer device
May 13 23:47:18.919983 kernel: pstore: Using crash dump compression: deflate
May 13 23:47:18.919991 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 23:47:18.920000 kernel: NET: Registered PF_INET6 protocol family
May 13 23:47:18.920008 kernel: Segment Routing with IPv6
May 13 23:47:18.920016 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:47:18.920024 kernel: NET: Registered PF_PACKET protocol family
May 13 23:47:18.920032 kernel: Key type dns_resolver registered
May 13 23:47:18.920040 kernel: IPI shorthand broadcast: enabled
May 13 23:47:18.920051 kernel: sched_clock: Marking stable (618002240, 133578549)->(772931841, -21351052)
May 13 23:47:18.920062 kernel: registered taskstats version 1
May 13 23:47:18.920070 kernel: Loading compiled-in X.509 certificates
May 13 23:47:18.920078 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 13 23:47:18.920087 kernel: Key type .fscrypt registered
May 13 23:47:18.920097 kernel: Key type fscrypt-provisioning registered
May 13 23:47:18.920105 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:47:18.920114 kernel: ima: Allocated hash algorithm: sha1
May 13 23:47:18.920122 kernel: ima: No architecture policies found
May 13 23:47:18.920130 kernel: clk: Disabling unused clocks
May 13 23:47:18.920138 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 13 23:47:18.920147 kernel: Write protecting the kernel read-only data: 40960k
May 13 23:47:18.920219 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 13 23:47:18.920229 kernel: Run /init as init process
May 13 23:47:18.920240 kernel: with arguments:
May 13 23:47:18.920248 kernel: /init
May 13 23:47:18.920257 kernel: with environment:
May 13 23:47:18.920265 kernel: HOME=/
May 13 23:47:18.920273 kernel: TERM=linux
May 13 23:47:18.920281 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:47:18.920290 systemd[1]: Successfully made /usr/ read-only.
May 13 23:47:18.920302 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:47:18.920314 systemd[1]: Detected virtualization kvm.
May 13 23:47:18.920322 systemd[1]: Detected architecture x86-64.
May 13 23:47:18.920331 systemd[1]: Running in initrd.
May 13 23:47:18.920339 systemd[1]: No hostname configured, using default hostname.
May 13 23:47:18.920348 systemd[1]: Hostname set to <localhost>.
May 13 23:47:18.920357 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:47:18.920366 systemd[1]: Queued start job for default target initrd.target.
May 13 23:47:18.920375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:47:18.920387 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:47:18.920396 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:47:18.920405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:47:18.920414 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:47:18.920424 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:47:18.920435 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:47:18.920444 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:47:18.920455 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:47:18.920464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:47:18.920473 systemd[1]: Reached target paths.target - Path Units.
May 13 23:47:18.920482 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:47:18.920491 systemd[1]: Reached target swap.target - Swaps.
May 13 23:47:18.920500 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:47:18.920508 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:47:18.920517 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:47:18.920528 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:47:18.920537 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:47:18.920546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:47:18.920555 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:47:18.920564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:47:18.920573 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:47:18.920581 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:47:18.920590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:47:18.920599 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:47:18.920610 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:47:18.920619 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:47:18.920628 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:47:18.920637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:47:18.920646 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:47:18.920655 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:47:18.920666 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:47:18.920696 systemd-journald[192]: Collecting audit messages is disabled.
May 13 23:47:18.920720 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:47:18.920730 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:47:18.920739 systemd-journald[192]: Journal started
May 13 23:47:18.920767 systemd-journald[192]: Runtime Journal (/run/log/journal/892e6d30d51742f1a14bc99a0ab72fdd) is 6M, max 47.9M, 41.9M free.
May 13 23:47:18.912273 systemd-modules-load[193]: Inserted module 'overlay'
May 13 23:47:18.926552 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:47:18.927137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:47:18.933462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:47:18.936860 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:47:18.941099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:47:18.943582 systemd-modules-load[193]: Inserted module 'br_netfilter'
May 13 23:47:18.946773 kernel: Bridge firewalling registered
May 13 23:47:18.944344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:47:18.944847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:47:18.948945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:47:18.952555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:47:18.962364 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:47:18.965663 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:47:18.969007 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:47:18.971733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:47:18.981848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:47:18.991872 dracut-cmdline[229]: dracut-dracut-053
May 13 23:47:19.000172 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:47:19.042578 systemd-resolved[232]: Positive Trust Anchors:
May 13 23:47:19.042602 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:47:19.042640 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:47:19.053750 systemd-resolved[232]: Defaulting to hostname 'linux'.
May 13 23:47:19.055683 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:47:19.058184 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:47:19.082195 kernel: SCSI subsystem initialized
May 13 23:47:19.091184 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:47:19.103189 kernel: iscsi: registered transport (tcp)
May 13 23:47:19.124191 kernel: iscsi: registered transport (qla4xxx)
May 13 23:47:19.124214 kernel: QLogic iSCSI HBA Driver
May 13 23:47:19.172408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:47:19.175825 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:47:19.210658 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:47:19.210688 kernel: device-mapper: uevent: version 1.0.3
May 13 23:47:19.211705 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:47:19.252192 kernel: raid6: avx2x4 gen() 30023 MB/s
May 13 23:47:19.269188 kernel: raid6: avx2x2 gen() 30583 MB/s
May 13 23:47:19.286281 kernel: raid6: avx2x1 gen() 25918 MB/s
May 13 23:47:19.286302 kernel: raid6: using algorithm avx2x2 gen() 30583 MB/s
May 13 23:47:19.304289 kernel: raid6: .... xor() 19845 MB/s, rmw enabled
May 13 23:47:19.304309 kernel: raid6: using avx2x2 recovery algorithm
May 13 23:47:19.325185 kernel: xor: automatically using best checksumming function avx
May 13 23:47:19.472185 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:47:19.485376 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:47:19.487118 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:47:19.516996 systemd-udevd[415]: Using default interface naming scheme 'v255'.
May 13 23:47:19.522408 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:47:19.526415 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:47:19.560473 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
May 13 23:47:19.595260 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:47:19.596623 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:47:19.679179 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:47:19.683906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:47:19.707722 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:47:19.710942 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:47:19.713664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:47:19.717628 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 23:47:19.721213 kernel: cryptd: max_cpu_qlen set to 1000
May 13 23:47:19.721230 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 23:47:19.717874 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:47:19.721503 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:47:19.734647 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 23:47:19.734678 kernel: AES CTR mode by8 optimization enabled
May 13 23:47:19.740804 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:47:19.740828 kernel: GPT:9289727 != 19775487
May 13 23:47:19.740847 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:47:19.740861 kernel: GPT:9289727 != 19775487
May 13 23:47:19.742758 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:47:19.742797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:47:19.755030 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:47:19.763213 kernel: libata version 3.00 loaded.
May 13 23:47:19.770534 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:47:19.772642 kernel: ahci 0000:00:1f.2: version 3.0
May 13 23:47:19.774429 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 23:47:19.770889 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:47:19.777473 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:47:19.785958 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 23:47:19.786221 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 23:47:19.780993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:47:19.781182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:47:19.786971 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:47:19.792926 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473)
May 13 23:47:19.792946 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (467)
May 13 23:47:19.792961 kernel: scsi host0: ahci
May 13 23:47:19.797183 kernel: scsi host1: ahci
May 13 23:47:19.797603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:47:19.801360 kernel: scsi host2: ahci
May 13 23:47:19.801574 kernel: scsi host3: ahci
May 13 23:47:19.803189 kernel: scsi host4: ahci
May 13 23:47:19.803428 kernel: scsi host5: ahci
May 13 23:47:19.804780 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 13 23:47:19.804801 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 13 23:47:19.806776 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 13 23:47:19.806794 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 13 23:47:19.808778 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 13 23:47:19.808800 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 13 23:47:19.821603 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:47:19.838908 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:47:19.856168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:47:19.864451 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:47:19.865854 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:47:19.869996 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:47:19.871661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:47:19.871715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:47:19.875058 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:47:19.882708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:47:19.891666 disk-uuid[570]: Primary Header is updated.
May 13 23:47:19.891666 disk-uuid[570]: Secondary Entries is updated.
May 13 23:47:19.891666 disk-uuid[570]: Secondary Header is updated.
May 13 23:47:19.896183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:47:19.901188 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:47:19.911462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:47:19.917477 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:47:19.947904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:47:20.120186 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 23:47:20.120270 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 23:47:20.120287 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 23:47:20.120300 kernel: ata3.00: applying bridge limits
May 13 23:47:20.121317 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 23:47:20.122187 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 23:47:20.123181 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 23:47:20.123199 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 23:47:20.124189 kernel: ata3.00: configured for UDMA/100
May 13 23:47:20.125188 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 23:47:20.165194 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 23:47:20.165529 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 23:47:20.179382 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 23:47:20.902190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:47:20.902491 disk-uuid[571]: The operation has completed successfully.
May 13 23:47:20.936525 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:47:20.937553 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:47:20.972071 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:47:20.986364 sh[599]: Success
May 13 23:47:20.998185 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 23:47:21.033854 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:47:21.037555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:47:21.052004 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:47:21.062650 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 13 23:47:21.062686 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 23:47:21.062706 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:47:21.063973 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:47:21.065626 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:47:21.069723 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:47:21.070408 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:47:21.071302 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:47:21.076787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:47:21.100446 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:47:21.100483 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:47:21.100504 kernel: BTRFS info (device vda6): using free space tree
May 13 23:47:21.103183 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:47:21.107182 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:47:21.114091 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:47:21.115379 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:47:21.281245 ignition[693]: Ignition 2.20.0
May 13 23:47:21.281256 ignition[693]: Stage: fetch-offline
May 13 23:47:21.281291 ignition[693]: no configs at "/usr/lib/ignition/base.d"
May 13 23:47:21.281305 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:47:21.281404 ignition[693]: parsed url from cmdline: ""
May 13 23:47:21.281408 ignition[693]: no config URL provided
May 13 23:47:21.281413 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:47:21.281422 ignition[693]: no config at "/usr/lib/ignition/user.ign"
May 13 23:47:21.281449 ignition[693]: op(1): [started] loading QEMU firmware config module
May 13 23:47:21.281454 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 23:47:21.290057 ignition[693]: op(1): [finished] loading QEMU firmware config module
May 13 23:47:21.317621 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:47:21.321381 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:47:21.331760 ignition[693]: parsing config with SHA512: b3d5540cd38b2f5f629564466e9186a47625977820a778be6592ab4c5f63d225a92771298efe3c264067886e913ff4fea8e71305afbe645f30972b65d5d4eaa3
May 13 23:47:21.337857 unknown[693]: fetched base config from "system"
May 13 23:47:21.338776 ignition[693]: fetch-offline: fetch-offline passed
May 13 23:47:21.337878 unknown[693]: fetched user config from "qemu"
May 13 23:47:21.338914 ignition[693]: Ignition finished successfully
May 13 23:47:21.341766 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:47:21.367994 systemd-networkd[786]: lo: Link UP
May 13 23:47:21.368006 systemd-networkd[786]: lo: Gained carrier
May 13 23:47:21.371108 systemd-networkd[786]: Enumeration completed
May 13 23:47:21.371276 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:47:21.371609 systemd[1]: Reached target network.target - Network.
May 13 23:47:21.371913 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:47:21.372934 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:47:21.379619 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:47:21.379627 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:47:21.383673 systemd-networkd[786]: eth0: Link UP
May 13 23:47:21.383679 systemd-networkd[786]: eth0: Gained carrier
May 13 23:47:21.383686 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:47:21.401246 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:47:21.402964 ignition[791]: Ignition 2.20.0
May 13 23:47:21.402975 ignition[791]: Stage: kargs
May 13 23:47:21.403176 ignition[791]: no configs at "/usr/lib/ignition/base.d"
May 13 23:47:21.403191 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:47:21.404081 ignition[791]: kargs: kargs passed
May 13 23:47:21.404122 ignition[791]: Ignition finished successfully
May 13 23:47:21.407997 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:47:21.409116 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 23:47:21.445367 ignition[800]: Ignition 2.20.0 May 13 23:47:21.445378 ignition[800]: Stage: disks May 13 23:47:21.445522 ignition[800]: no configs at "/usr/lib/ignition/base.d" May 13 23:47:21.445534 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:21.446353 ignition[800]: disks: disks passed May 13 23:47:21.448862 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:47:21.446397 ignition[800]: Ignition finished successfully May 13 23:47:21.450253 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:47:21.451752 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:47:21.453908 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:47:21.454944 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:47:21.456039 systemd[1]: Reached target basic.target - Basic System. May 13 23:47:21.458542 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:47:21.483502 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:47:21.489995 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:47:21.491063 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:47:21.587178 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 13 23:47:21.587587 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:47:21.588995 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:47:21.591514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:47:21.593490 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:47:21.593801 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 23:47:21.593840 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:47:21.593864 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:47:21.612110 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:47:21.614516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:47:21.617670 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819) May 13 23:47:21.617699 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:47:21.619632 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:47:21.619649 kernel: BTRFS info (device vda6): using free space tree May 13 23:47:21.623183 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:47:21.628701 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
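
systemd-fsck-root above is handed /dev/disk/by-label/ROOT rather than a raw device node; udev maintains that symlink, and on this machine it resolves to the vda9 partition that is subsequently mounted at /sysroot. A sketch of the resolution step (Linux-only; prints a fallback elsewhere):

import os

label = "/dev/disk/by-label/ROOT"
if os.path.exists(label):
    print(label, "->", os.path.realpath(label))
else:
    print("no filesystem labelled ROOT on this machine")
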
May 13 23:47:21.689145 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:47:21.693640 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory May 13 23:47:21.698423 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:47:21.702798 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:47:21.843505 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:47:21.844585 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:47:21.845447 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:47:21.868188 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:47:21.880452 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:47:21.892225 ignition[933]: INFO : Ignition 2.20.0 May 13 23:47:21.892225 ignition[933]: INFO : Stage: mount May 13 23:47:21.893845 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:47:21.893845 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:21.893845 ignition[933]: INFO : mount: mount passed May 13 23:47:21.893845 ignition[933]: INFO : Ignition finished successfully May 13 23:47:21.899382 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:47:21.901512 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:47:22.061807 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:47:22.064355 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:47:22.086874 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (946) May 13 23:47:22.086905 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:47:22.086917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:47:22.087742 kernel: BTRFS info (device vda6): using free space tree May 13 23:47:22.091177 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:47:22.092273 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
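
The cut: ... No such file or directory lines above are expected on a first boot: initrd-setup-root appears to extract the first field from the account databases under /sysroot before those files exist, then seeds them. A rough Python equivalent of that cut -d: -f1 invocation, including the missing-file case (the exact script is an assumption here):

from pathlib import Path

def first_fields(path: str) -> list[str]:
    """Mimic `cut -d: -f1 FILE`, tolerating a missing file."""
    p = Path(path)
    if not p.exists():
        print(f"cut: {path}: No such file or directory")
        return []
    return [line.split(":", 1)[0] for line in p.read_text().splitlines()]

print(first_fields("/sysroot/etc/passwd"))
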
May 13 23:47:22.120753 ignition[963]: INFO : Ignition 2.20.0
May 13 23:47:22.120753 ignition[963]: INFO : Stage: files
May 13 23:47:22.122868 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:47:22.122868 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:47:22.122868 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:47:22.122868 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:47:22.122868 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:47:22.129547 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:47:22.129547 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:47:22.129547 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:47:22.129547 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 23:47:22.129547 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 13 23:47:22.126301 unknown[963]: wrote ssh authorized keys file for user: core
May 13 23:47:22.224715 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:47:22.426784 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 23:47:22.426784 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:47:22.430623 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 23:47:22.456354 systemd-networkd[786]: eth0: Gained IPv6LL
May 13 23:47:23.000291 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 23:47:23.242766 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:47:23.245038 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:47:23.245038 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:47:23.245038 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:47:23.250077 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:47:23.250077 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:47:23.253470 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:47:23.253470 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:47:23.256908 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:47:23.258803 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:47:23.260672 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:47:23.262410 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 23:47:23.264949 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 23:47:23.264949 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 23:47:23.269527 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 13 23:47:23.806053 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 23:47:25.383667 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 23:47:25.383667 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 23:47:25.388011 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:47:25.388011 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:47:25.388011 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 23:47:25.388011 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 23:47:25.388011 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:47:25.388011 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:47:25.388011 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 23:47:25.388011 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:47:25.409042 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:47:25.413851 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:47:25.415630 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:47:25.415630 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:47:25.415630 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:47:25.415630 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:47:25.415630 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:47:25.415630 ignition[963]: INFO : files: files passed
May 13 23:47:25.415630 ignition[963]: INFO : Ignition finished successfully
May 13 23:47:25.417191 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:47:25.419517 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:47:25.422001 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:47:25.441057 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:47:25.441175 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:47:25.445328 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:47:25.446984 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:47:25.446984 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:47:25.452429 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:47:25.448145 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:47:25.450045 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:47:25.453110 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:47:25.504725 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:47:25.505809 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:47:25.508686 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:47:25.511027 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:47:25.513063 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:47:25.515303 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:47:25.546612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:47:25.550553 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:47:25.568173 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:47:25.570726 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:47:25.573406 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:47:25.575518 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:47:25.576724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:47:25.579632 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:47:25.581906 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:47:25.583953 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:47:25.586475 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:47:25.589105 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:47:25.591575 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
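
Each remote asset above is fetched with numbered attempts ("GET ...: attempt #1") and an explicit result line. A sketch of that retry-and-log pattern; Ignition itself is written in Go with its own backoff policy, so this urllib version is only an illustration:

import time
import urllib.request

def fetch(url: str, attempts: int = 3, delay: float = 2.0) -> bytes:
    """GET with numbered attempts, logging in the style seen above."""
    for n in range(1, attempts + 1):
        print(f"GET {url}: attempt #{n}")
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print("GET result: OK")
                return resp.read()
        except OSError as exc:        # URLError/HTTPError are OSError subclasses
            print(f"GET error: {exc}")
            time.sleep(delay)
    raise RuntimeError(f"giving up on {url}")
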
May 13 23:47:25.593913 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:47:25.596714 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:47:25.598803 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:47:25.600844 systemd[1]: Stopped target swap.target - Swaps. May 13 23:47:25.602476 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:47:25.603516 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:47:25.605873 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:47:25.608173 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:47:25.610584 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:47:25.611609 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:47:25.614279 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:47:25.615326 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:47:25.617606 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:47:25.618721 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:47:25.621115 systemd[1]: Stopped target paths.target - Path Units. May 13 23:47:25.622905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:47:25.627211 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:47:25.629950 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:47:25.631831 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:47:25.633763 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:47:25.634664 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:47:25.636667 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:47:25.637576 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:47:25.639688 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:47:25.640885 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:47:25.643428 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:47:25.644450 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:47:25.647327 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:47:25.649229 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:47:25.650266 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:47:25.661768 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:47:25.663918 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:47:25.665253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:47:25.667978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:47:25.669254 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
May 13 23:47:25.674831 ignition[1019]: INFO : Ignition 2.20.0 May 13 23:47:25.675926 ignition[1019]: INFO : Stage: umount May 13 23:47:25.675926 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:47:25.675926 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:25.679556 ignition[1019]: INFO : umount: umount passed May 13 23:47:25.679556 ignition[1019]: INFO : Ignition finished successfully May 13 23:47:25.681920 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:47:25.683033 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:47:25.685368 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:47:25.686397 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:47:25.690865 systemd[1]: Stopped target network.target - Network. May 13 23:47:25.691892 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:47:25.692756 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:47:25.694976 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:47:25.695933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:47:25.698747 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:47:25.698801 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:47:25.701716 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:47:25.702754 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:47:25.704916 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:47:25.707098 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:47:25.710131 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:47:25.711637 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:47:25.712626 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:47:25.714704 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:47:25.715721 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:47:25.719823 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:47:25.721302 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:47:25.722319 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:47:25.725820 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:47:25.729315 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:47:25.730492 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:47:25.733143 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:47:25.734373 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:47:25.737942 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:47:25.740214 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:47:25.741398 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:47:25.744452 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:47:25.745409 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:47:25.747542 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
May 13 23:47:25.747602 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:47:25.750683 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:47:25.750731 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:47:25.754263 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:47:25.757623 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:47:25.758871 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:47:25.776358 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:47:25.777397 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:47:25.780351 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:47:25.781355 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:47:25.783858 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:47:25.784891 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:47:25.787047 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:47:25.787091 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:47:25.790045 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:47:25.790961 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:47:25.793085 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:47:25.794052 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:47:25.796144 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:47:25.797116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:47:25.800484 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:47:25.802721 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:47:25.802777 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:47:25.806268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:47:25.806320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:47:25.810454 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:47:25.811870 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:47:25.820324 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:47:25.821485 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:47:25.823950 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:47:25.826657 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:47:25.847829 systemd[1]: Switching root. May 13 23:47:25.887681 systemd-journald[192]: Journal stopped May 13 23:47:27.086523 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
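
At "Switching root" PID 1 moves the mount tree onto the real root filesystem and re-executes itself; the initrd journald instance receives SIGTERM, and its /run journal is merged into the new boot's log once the real journald starts (visible below as "Journal started"). Both phases share one boot ID, so the standard tooling reads them as a single boot:

import subprocess

# journalctl, -b (boot selector), -n and --no-pager are stock systemd CLI;
# -b 0 selects the current boot, covering initrd and real-root entries alike.
subprocess.run(["journalctl", "-b", "0", "-n", "10", "--no-pager"], check=False)
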
May 13 23:47:27.086604 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:47:27.086618 kernel: SELinux: policy capability open_perms=1 May 13 23:47:27.086632 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:47:27.086643 kernel: SELinux: policy capability always_check_network=0 May 13 23:47:27.086655 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:47:27.086677 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:47:27.086695 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:47:27.086717 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:47:27.086728 kernel: audit: type=1403 audit(1747180046.275:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:47:27.086745 systemd[1]: Successfully loaded SELinux policy in 38.817ms. May 13 23:47:27.086760 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.948ms. May 13 23:47:27.086773 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:47:27.086785 systemd[1]: Detected virtualization kvm. May 13 23:47:27.086797 systemd[1]: Detected architecture x86-64. May 13 23:47:27.086812 systemd[1]: Detected first boot. May 13 23:47:27.086824 systemd[1]: Initializing machine ID from VM UUID. May 13 23:47:27.086836 zram_generator::config[1065]: No configuration found. May 13 23:47:27.086850 kernel: Guest personality initialized and is inactive May 13 23:47:27.086867 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 23:47:27.086879 kernel: Initialized host personality May 13 23:47:27.086890 kernel: NET: Registered PF_VSOCK protocol family May 13 23:47:27.086902 systemd[1]: Populated /etc with preset unit settings. May 13 23:47:27.086917 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:47:27.086930 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:47:27.086942 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:47:27.086954 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:47:27.086966 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:47:27.086980 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:47:27.086992 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:47:27.087005 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:47:27.087017 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:47:27.087032 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:47:27.087044 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:47:27.087057 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:47:27.087069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:47:27.087082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 13 23:47:27.087094 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:47:27.087106 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:47:27.087118 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:47:27.087134 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:47:27.087146 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 23:47:27.087173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:47:27.087185 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:47:27.087198 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:47:27.087210 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:47:27.087223 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:47:27.087235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:47:27.087250 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:47:27.087262 systemd[1]: Reached target slices.target - Slice Units. May 13 23:47:27.087274 systemd[1]: Reached target swap.target - Swaps. May 13 23:47:27.087287 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:47:27.087300 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:47:27.087313 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:47:27.087326 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:47:27.087339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:47:27.087351 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:47:27.087363 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:47:27.087378 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:47:27.087390 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:47:27.087402 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:47:27.087414 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:47:27.087427 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:47:27.087439 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:47:27.087451 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:47:27.087464 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:47:27.087479 systemd[1]: Reached target machines.target - Containers. May 13 23:47:27.087491 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:47:27.087503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:47:27.087516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
May 13 23:47:27.087528 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:47:27.087540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:47:27.087553 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:47:27.087573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:47:27.087588 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:47:27.087600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:47:27.087613 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:47:27.087626 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:47:27.087638 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:47:27.087651 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:47:27.087662 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:47:27.087675 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:47:27.087688 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:47:27.087702 kernel: fuse: init (API version 7.39) May 13 23:47:27.087714 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:47:27.087726 kernel: loop: module loaded May 13 23:47:27.087739 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:47:27.087752 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:47:27.087764 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:47:27.087777 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:47:27.087789 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:47:27.087801 systemd[1]: Stopped verity-setup.service. May 13 23:47:27.087816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:47:27.087828 kernel: ACPI: bus type drm_connector registered May 13 23:47:27.087840 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:47:27.087852 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:47:27.087866 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:47:27.087878 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:47:27.087890 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:47:27.087903 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:47:27.087918 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:47:27.087930 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:47:27.087962 systemd-journald[1143]: Collecting audit messages is disabled. May 13 23:47:27.087984 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
May 13 23:47:27.087997 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:47:27.088010 systemd-journald[1143]: Journal started May 13 23:47:27.088033 systemd-journald[1143]: Runtime Journal (/run/log/journal/892e6d30d51742f1a14bc99a0ab72fdd) is 6M, max 47.9M, 41.9M free. May 13 23:47:26.827393 systemd[1]: Queued start job for default target multi-user.target. May 13 23:47:26.838000 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:47:26.838452 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:47:27.089175 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:47:27.091153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:47:27.091392 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:47:27.092922 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:47:27.093138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:47:27.094585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:47:27.094803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:47:27.096393 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:47:27.096627 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:47:27.098132 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:47:27.098365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:47:27.100025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:47:27.101607 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:47:27.103526 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:47:27.105687 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:47:27.122929 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:47:27.126424 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:47:27.129174 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:47:27.130547 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:47:27.130600 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:47:27.132794 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:47:27.142426 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:47:27.144762 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:47:27.145998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:47:27.147516 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:47:27.151358 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:47:27.152696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
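
The "Runtime Journal ... is 6M, max 47.9M, 41.9M free" line above reflects journald's default sizing: RuntimeMaxUse= falls back to 10% of the backing filesystem (journald.conf(5)), so the reported cap implies the size of this machine's /run tmpfs. A back-of-the-envelope check, assuming the 10% default is in effect:

max_use_mib = 47.9                     # "max 47.9M" from the journal line above
runtime_fs_mib = max_use_mib / 0.10    # assumed default: 10% of the filesystem
print(f"implied /run size: ~{runtime_fs_mib:.0f} MiB")
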
May 13 23:47:27.154592 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:47:27.155747 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:47:27.158390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:47:27.163359 systemd-journald[1143]: Time spent on flushing to /var/log/journal/892e6d30d51742f1a14bc99a0ab72fdd is 15.269ms for 1029 entries. May 13 23:47:27.163359 systemd-journald[1143]: System Journal (/var/log/journal/892e6d30d51742f1a14bc99a0ab72fdd) is 8M, max 195.6M, 187.6M free. May 13 23:47:27.189234 systemd-journald[1143]: Received client request to flush runtime journal. May 13 23:47:27.189270 kernel: loop0: detected capacity change from 0 to 151640 May 13 23:47:27.162731 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:47:27.166002 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:47:27.172502 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:47:27.174109 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:47:27.175503 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:47:27.177056 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:47:27.178859 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:47:27.185655 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:47:27.190336 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:47:27.193351 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:47:27.195119 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:47:27.197150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:47:27.213152 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:47:27.220181 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:47:27.223854 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:47:27.225616 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:47:27.230373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:47:27.248198 kernel: loop1: detected capacity change from 0 to 109808 May 13 23:47:27.261145 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. May 13 23:47:27.261176 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. May 13 23:47:27.267426 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:47:27.281203 kernel: loop2: detected capacity change from 0 to 218376 May 13 23:47:27.315190 kernel: loop3: detected capacity change from 0 to 151640 May 13 23:47:27.330181 kernel: loop4: detected capacity change from 0 to 109808 May 13 23:47:27.341184 kernel: loop5: detected capacity change from 0 to 218376 May 13 23:47:27.348565 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 23:47:27.349374 (sd-merge)[1210]: Merged extensions into '/usr'. 
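
systemd-sysext above merges the containerd-flatcar, docker-flatcar and kubernetes extension images over /usr (the kubernetes .raw was linked into /etc/extensions by Ignition earlier). The discovery half of that operation amounts to scanning a few fixed directories; a sketch, noting that sysext also accepts plain directory trees and that the overlayfs merge itself is not shown:

from pathlib import Path

# Standard sysext search paths per systemd-sysext(8).
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

images = []
for d in SEARCH_PATHS:
    p = Path(d)
    if p.is_dir():
        images.extend(sorted(p.glob("*.raw")))
for img in images:
    print("candidate extension image:", img)
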
May 13 23:47:27.356280 systemd[1]: Reload requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:47:27.356298 systemd[1]: Reloading... May 13 23:47:27.428187 zram_generator::config[1237]: No configuration found. May 13 23:47:27.491273 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:47:27.571775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:27.637181 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:47:27.637892 systemd[1]: Reloading finished in 281 ms. May 13 23:47:27.657460 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:47:27.659056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:47:27.674734 systemd[1]: Starting ensure-sysext.service... May 13 23:47:27.676962 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:47:27.692177 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)... May 13 23:47:27.692203 systemd[1]: Reloading... May 13 23:47:27.700237 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:47:27.700521 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:47:27.701469 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:47:27.701739 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. May 13 23:47:27.701809 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. May 13 23:47:27.705939 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:47:27.705953 systemd-tmpfiles[1276]: Skipping /boot May 13 23:47:27.720656 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:47:27.720671 systemd-tmpfiles[1276]: Skipping /boot May 13 23:47:27.750237 zram_generator::config[1308]: No configuration found. May 13 23:47:27.857274 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:27.922601 systemd[1]: Reloading finished in 230 ms. May 13 23:47:27.935000 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:47:27.956139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:47:27.965284 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:47:27.968016 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:47:27.978390 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:47:27.982263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:47:27.988483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:47:27.991415 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
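
The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") come from its first-match-wins rule: once a tmpfiles.d line has claimed a path, later lines for the same path are dropped with a warning. A toy version of that check; the entries and their mode/owner fields are invented for illustration:

seen = {}
entries = [
    ("a.conf:1", "d /root 0700 root root -"),
    ("b.conf:9", "d /var/lib/systemd 0755 root root -"),
    ("c.conf:4", "d /root 0750 root root -"),   # duplicate path
]
for origin, line in entries:
    path = line.split()[1]                      # second field is the path
    if path in seen:
        print(f'{origin}: Duplicate line for path "{path}", ignoring.')
    else:
        seen[path] = origin
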
May 13 23:47:27.998311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:47:27.998495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:47:28.000096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:47:28.002804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:47:28.011454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:47:28.012771 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:47:28.012907 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:47:28.015356 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:47:28.017088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:47:28.018743 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:47:28.020655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:47:28.021006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:47:28.023264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:47:28.023473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:47:28.025417 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:47:28.025635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:47:28.034032 augenrules[1375]: No rules May 13 23:47:28.034106 systemd-udevd[1352]: Using default interface naming scheme 'v255'. May 13 23:47:28.037023 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:47:28.037322 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:47:28.039735 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:47:28.040076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:47:28.041795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:47:28.044286 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:47:28.051785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:47:28.057373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:47:28.058725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:47:28.058868 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:47:28.060916 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 13 23:47:28.062500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:47:28.064356 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:47:28.067049 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:47:28.068982 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:47:28.087463 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:47:28.089869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:47:28.090545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:47:28.092647 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:47:28.092912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:47:28.097419 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:47:28.098475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:47:28.100680 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:47:28.100888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:47:28.108338 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:47:28.110496 systemd[1]: Finished ensure-sysext.service. May 13 23:47:28.130663 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:47:28.135256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:47:28.135350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:47:28.137842 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:47:28.142357 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:47:28.144875 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 23:47:28.181189 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1409) May 13 23:47:28.183954 systemd-resolved[1348]: Positive Trust Anchors: May 13 23:47:28.183974 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:47:28.184015 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:47:28.190978 systemd-resolved[1348]: Defaulting to hostname 'linux'. May 13 23:47:28.194464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
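
systemd-resolved's negative trust anchors listed above mark names (RFC 6303 reverse zones, home.arpa, local, and the like) for which DNSSEC validation is skipped, since they can never validate against the public root. The lookup is effectively a suffix match on the queried name; a simplified sketch over a few of the anchors above:

NEGATIVE_ANCHORS = {"home.arpa", "168.192.in-addr.arpa", "local", "lan", "internal", "test"}

def skips_dnssec(name: str) -> bool:
    """True if `name` is at or below one of the negative trust anchors."""
    labels = name.rstrip(".").split(".")
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

print(skips_dnssec("printer.lan"))   # True
print(skips_dnssec("example.com"))   # False
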
May 13 23:47:28.198575 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:47:28.224757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:47:28.230341 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 23:47:28.228976 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:47:28.243184 kernel: ACPI: button: Power Button [PWRF] May 13 23:47:28.249469 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:47:28.258521 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:47:28.265586 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:47:28.268201 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 23:47:28.271188 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 13 23:47:28.271511 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 23:47:28.273269 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 23:47:28.273600 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 23:47:28.273753 systemd-networkd[1420]: lo: Link UP May 13 23:47:28.273757 systemd-networkd[1420]: lo: Gained carrier May 13 23:47:28.275922 systemd-networkd[1420]: Enumeration completed May 13 23:47:28.275997 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:47:28.276835 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:47:28.276840 systemd-networkd[1420]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:47:28.277606 systemd[1]: Reached target network.target - Network. May 13 23:47:28.277755 systemd-networkd[1420]: eth0: Link UP May 13 23:47:28.277759 systemd-networkd[1420]: eth0: Gained carrier May 13 23:47:28.277772 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:47:28.281210 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:47:28.284895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:47:28.294271 systemd-networkd[1420]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:47:28.296841 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. May 13 23:47:29.206049 systemd-timesyncd[1421]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 23:47:29.206092 systemd-timesyncd[1421]: Initial clock synchronization to Tue 2025-05-13 23:47:29.205973 UTC. May 13 23:47:29.206123 systemd-resolved[1348]: Clock change detected. Flushing caches. May 13 23:47:29.224565 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:47:29.232610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
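
The jump from 23:47:28 to 23:47:29 above is systemd-timesyncd stepping the clock after its first NTP exchange with 10.0.0.1:123 (the DHCP-provided gateway), which is also why systemd-resolved flushes its caches. Stripped of timesyncd's delay and offset arithmetic, the wire exchange is a single UDP round trip; a bare-bones SNTP client as a sketch (the server default is a placeholder, not this VM's):

import socket
import struct
import time

def sntp_time(server: str = "pool.ntp.org") -> float:
    """One-shot SNTP query: returns the server's Unix time (whole seconds)."""
    packet = b"\x1b" + 47 * b"\x00"              # LI=0 VN=3 Mode=3 (client), rest zero
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    secs = struct.unpack("!I", data[40:44])[0]   # transmit timestamp, seconds part
    return secs - 2208988800                     # NTP epoch (1900) -> Unix epoch (1970)

print(time.ctime(sntp_time()))
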
May 13 23:47:29.288534 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:47:29.300696 kernel: kvm_amd: TSC scaling supported May 13 23:47:29.300734 kernel: kvm_amd: Nested Virtualization enabled May 13 23:47:29.300748 kernel: kvm_amd: Nested Paging enabled May 13 23:47:29.301686 kernel: kvm_amd: LBR virtualization supported May 13 23:47:29.301702 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 13 23:47:29.302715 kernel: kvm_amd: Virtual GIF supported May 13 23:47:29.324517 kernel: EDAC MC: Ver: 3.0.0 May 13 23:47:29.349362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:47:29.357678 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:47:29.360542 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:47:29.380930 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:47:29.411820 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:47:29.413450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:47:29.414625 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:47:29.415839 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:47:29.417148 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:47:29.418627 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:47:29.419888 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:47:29.421170 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:47:29.422446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:47:29.422480 systemd[1]: Reached target paths.target - Path Units. May 13 23:47:29.423594 systemd[1]: Reached target timers.target - Timer Units. May 13 23:47:29.425513 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:47:29.428745 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:47:29.433179 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:47:29.434952 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:47:29.436551 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:47:29.440934 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:47:29.442504 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:47:29.445188 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:47:29.446940 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:47:29.448389 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:47:29.449597 systemd[1]: Reached target basic.target - Basic System. May 13 23:47:29.450857 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:47:29.450892 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
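Most of the listeners set up here are socket units that start their service on first connection; note that sshd is reachable over plain TCP, an AF_UNIX socket, and AF_VSOCK, the latter two produced by systemd-ssh-generator. To see what is listening and what each socket would activate:

    # Listening sockets and the units they activate on demand
    systemctl list-sockets
    # Inspect one of the generator-produced ssh sockets
    systemctl cat sshd-unix-local.socket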
May 13 23:47:29.452113 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:47:29.454308 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:47:29.455425 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:47:29.458501 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:47:29.460683 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:47:29.461844 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:47:29.462856 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:47:29.466701 jq[1458]: false May 13 23:47:29.466365 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:47:29.470600 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:47:29.474611 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:47:29.481971 extend-filesystems[1459]: Found loop3 May 13 23:47:29.491614 extend-filesystems[1459]: Found loop4 May 13 23:47:29.491614 extend-filesystems[1459]: Found loop5 May 13 23:47:29.491614 extend-filesystems[1459]: Found sr0 May 13 23:47:29.491614 extend-filesystems[1459]: Found vda May 13 23:47:29.491614 extend-filesystems[1459]: Found vda1 May 13 23:47:29.491614 extend-filesystems[1459]: Found vda2 May 13 23:47:29.491614 extend-filesystems[1459]: Found vda3 May 13 23:47:29.491614 extend-filesystems[1459]: Found usr May 13 23:47:29.491614 extend-filesystems[1459]: Found vda4 May 13 23:47:29.491614 extend-filesystems[1459]: Found vda6 May 13 23:47:29.491614 extend-filesystems[1459]: Found vda7 May 13 23:47:29.491614 extend-filesystems[1459]: Found vda9 May 13 23:47:29.491614 extend-filesystems[1459]: Checking size of /dev/vda9 May 13 23:47:29.491614 extend-filesystems[1459]: Resized partition /dev/vda9 May 13 23:47:29.508630 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:47:29.500105 dbus-daemon[1457]: [system] SELinux support is enabled May 13 23:47:29.515749 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1396) May 13 23:47:29.491952 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:47:29.515869 extend-filesystems[1470]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:47:29.495855 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:47:29.496454 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:47:29.500521 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:47:29.515635 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:47:29.519168 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:47:29.524928 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:47:29.528516 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:47:29.528752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
May 13 23:47:29.531103 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:47:29.531353 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:47:29.533418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:47:29.533754 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:47:29.535427 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:47:29.562645 update_engine[1471]: I20250513 23:47:29.559726 1471 main.cc:92] Flatcar Update Engine starting May 13 23:47:29.562645 update_engine[1471]: I20250513 23:47:29.562333 1471 update_check_scheduler.cc:74] Next update check in 4m31s May 13 23:47:29.562918 jq[1480]: true May 13 23:47:29.562935 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:47:29.564728 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:47:29.564728 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:47:29.564728 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:47:29.568731 extend-filesystems[1459]: Resized filesystem in /dev/vda9 May 13 23:47:29.565676 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:47:29.565940 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:47:29.586050 jq[1490]: true May 13 23:47:29.587619 tar[1482]: linux-amd64/LICENSE May 13 23:47:29.589181 tar[1482]: linux-amd64/helm May 13 23:47:29.590994 systemd[1]: Started update-engine.service - Update Engine. May 13 23:47:29.591217 systemd-logind[1469]: Watching system buttons on /dev/input/event1 (Power Button) May 13 23:47:29.591254 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:47:29.593373 systemd-logind[1469]: New seat seat0. May 13 23:47:29.594025 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:47:29.594321 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:47:29.599256 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:47:29.599276 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:47:29.602620 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:47:29.606646 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:47:29.647079 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:47:29.654061 bash[1513]: Updated "/home/core/.ssh/authorized_keys" May 13 23:47:29.655807 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:47:29.657945 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
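extend-filesystems grew the root filesystem online: resize2fs took /dev/vda9 from 553472 to 1864699 4 KiB blocks, about 2.1 GiB to about 7.1 GiB, while it was mounted at /. ext4 supports this as long as the underlying partition has already been enlarged; a minimal sketch using the same device name as the log:

    # Grow a mounted ext4 filesystem to fill its (already enlarged) partition
    sudo resize2fs /dev/vda9
    # Confirm the new size
    df -h /
    sudo dumpe2fs -h /dev/vda9 | grep 'Block count'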
May 13 23:47:29.746543 containerd[1489]: time="2025-05-13T23:47:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:47:29.748316 containerd[1489]: time="2025-05-13T23:47:29.748276553Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:47:29.751552 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:47:29.760558 containerd[1489]: time="2025-05-13T23:47:29.760499128Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.432µs" May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.760674297Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.760701638Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.760936288Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.760952008Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.760977836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.761040784Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.761051675Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.761319016Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.761331861Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.761341799Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.761349995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:47:29.761461 containerd[1489]: time="2025-05-13T23:47:29.761462726Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:47:29.761903 containerd[1489]: time="2025-05-13T23:47:29.761711333Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:47:29.761903 containerd[1489]: time="2025-05-13T23:47:29.761740527Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:47:29.761903 containerd[1489]: time="2025-05-13T23:47:29.761750135Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:47:29.761903 containerd[1489]: time="2025-05-13T23:47:29.761791583Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:47:29.762090 containerd[1489]: time="2025-05-13T23:47:29.762043917Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:47:29.762172 containerd[1489]: time="2025-05-13T23:47:29.762148172Z" level=info msg="metadata content store policy set" policy=shared May 13 23:47:29.769524 containerd[1489]: time="2025-05-13T23:47:29.769468959Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:47:29.769595 containerd[1489]: time="2025-05-13T23:47:29.769530485Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:47:29.769595 containerd[1489]: time="2025-05-13T23:47:29.769546555Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:47:29.769595 containerd[1489]: time="2025-05-13T23:47:29.769558577Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:47:29.769595 containerd[1489]: time="2025-05-13T23:47:29.769571782Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:47:29.769595 containerd[1489]: time="2025-05-13T23:47:29.769581771Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:47:29.769595 containerd[1489]: time="2025-05-13T23:47:29.769592882Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:47:29.769772 containerd[1489]: time="2025-05-13T23:47:29.769604834Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:47:29.769772 containerd[1489]: time="2025-05-13T23:47:29.769616075Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:47:29.769772 containerd[1489]: time="2025-05-13T23:47:29.769627066Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:47:29.769772 containerd[1489]: time="2025-05-13T23:47:29.769644489Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:47:29.769772 containerd[1489]: time="2025-05-13T23:47:29.769659557Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:47:29.769896 containerd[1489]: time="2025-05-13T23:47:29.769813486Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:47:29.769896 containerd[1489]: time="2025-05-13T23:47:29.769838954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:47:29.769896 containerd[1489]: time="2025-05-13T23:47:29.769857378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 
23:47:29.769896 containerd[1489]: time="2025-05-13T23:47:29.769873609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:47:29.769896 containerd[1489]: time="2025-05-13T23:47:29.769888987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:47:29.770022 containerd[1489]: time="2025-05-13T23:47:29.769905268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:47:29.770022 containerd[1489]: time="2025-05-13T23:47:29.769924264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:47:29.770022 containerd[1489]: time="2025-05-13T23:47:29.769936697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:47:29.770022 containerd[1489]: time="2025-05-13T23:47:29.769949631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:47:29.770022 containerd[1489]: time="2025-05-13T23:47:29.769960622Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:47:29.770022 containerd[1489]: time="2025-05-13T23:47:29.769980409Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:47:29.770178 containerd[1489]: time="2025-05-13T23:47:29.770053186Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:47:29.770178 containerd[1489]: time="2025-05-13T23:47:29.770067042Z" level=info msg="Start snapshots syncer" May 13 23:47:29.770178 containerd[1489]: time="2025-05-13T23:47:29.770096647Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:47:29.770377 containerd[1489]: time="2025-05-13T23:47:29.770329354Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:47:29.770527 containerd[1489]: time="2025-05-13T23:47:29.770380650Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:47:29.770527 containerd[1489]: time="2025-05-13T23:47:29.770484405Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770597968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770627413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770642171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770655355Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770669622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770681064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770691684Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770718063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:47:29.770779 containerd[1489]: 
time="2025-05-13T23:47:29.770734404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:47:29.770779 containerd[1489]: time="2025-05-13T23:47:29.770748480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770798314Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770819714Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770832939Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770855311Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770866882Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770879696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770893172Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770913881Z" level=info msg="runtime interface created" May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770921104Z" level=info msg="created NRI interface" May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770932185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770957032Z" level=info msg="Connect containerd service" May 13 23:47:29.771039 containerd[1489]: time="2025-05-13T23:47:29.770985605Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:47:29.773200 containerd[1489]: time="2025-05-13T23:47:29.772747682Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:47:29.782288 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:47:29.786045 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:47:29.802513 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:47:29.802874 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:47:29.807244 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:47:29.828615 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:47:29.832705 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:47:29.838815 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:47:29.840235 systemd[1]: Reached target getty.target - Login Prompts. 
May 13 23:47:29.867330 containerd[1489]: time="2025-05-13T23:47:29.867266229Z" level=info msg="Start subscribing containerd event" May 13 23:47:29.867630 containerd[1489]: time="2025-05-13T23:47:29.867539381Z" level=info msg="Start recovering state" May 13 23:47:29.867835 containerd[1489]: time="2025-05-13T23:47:29.867420278Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:47:29.868054 containerd[1489]: time="2025-05-13T23:47:29.868034360Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:47:29.868240 containerd[1489]: time="2025-05-13T23:47:29.868182578Z" level=info msg="Start event monitor" May 13 23:47:29.868330 containerd[1489]: time="2025-05-13T23:47:29.868314786Z" level=info msg="Start cni network conf syncer for default" May 13 23:47:29.869723 containerd[1489]: time="2025-05-13T23:47:29.869701769Z" level=info msg="Start streaming server" May 13 23:47:29.869795 containerd[1489]: time="2025-05-13T23:47:29.869728539Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:47:29.869795 containerd[1489]: time="2025-05-13T23:47:29.869738508Z" level=info msg="runtime interface starting up..." May 13 23:47:29.869795 containerd[1489]: time="2025-05-13T23:47:29.869749078Z" level=info msg="starting plugins..." May 13 23:47:29.869795 containerd[1489]: time="2025-05-13T23:47:29.869777421Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:47:29.870005 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:47:29.871982 containerd[1489]: time="2025-05-13T23:47:29.871321859Z" level=info msg="containerd successfully booted in 0.125510s" May 13 23:47:30.016139 tar[1482]: linux-amd64/README.md May 13 23:47:30.042234 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:47:30.660574 systemd-networkd[1420]: eth0: Gained IPv6LL May 13 23:47:30.664030 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:47:30.665951 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:47:30.668620 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:47:30.671240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:30.686914 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:47:30.710803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:47:30.712630 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:47:30.712882 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:47:30.715239 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:47:31.658025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:31.659824 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:47:31.661210 systemd[1]: Startup finished in 762ms (kernel) + 7.561s (initrd) + 4.514s (userspace) = 12.837s. 
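The "Startup finished" line breaks boot time into kernel, initrd, and userspace phases (762 ms + 7.561 s + 4.514 s = 12.837 s). systemd-analyze reports the same figures and can attribute the userspace share to individual units:

    systemd-analyze                                   # same one-line summary
    systemd-analyze blame                             # per-unit startup cost
    systemd-analyze critical-chain multi-user.target  # longest dependency path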
May 13 23:47:31.662057 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:47:32.204319 kubelet[1584]: E0513 23:47:32.204249 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:47:32.208182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:47:32.208373 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:47:32.208745 systemd[1]: kubelet.service: Consumed 1.391s CPU time, 255.4M memory peak. May 13 23:47:32.501975 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:47:32.503347 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:59166.service - OpenSSH per-connection server daemon (10.0.0.1:59166). May 13 23:47:32.564393 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 59166 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:47:32.566120 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:32.576147 systemd-logind[1469]: New session 1 of user core. May 13 23:47:32.577393 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:47:32.578622 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:47:32.610295 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:47:32.612743 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:47:32.635329 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:47:32.637331 systemd-logind[1469]: New session c1 of user core. May 13 23:47:32.775570 systemd[1602]: Queued start job for default target default.target. May 13 23:47:32.785666 systemd[1602]: Created slice app.slice - User Application Slice. May 13 23:47:32.785697 systemd[1602]: Reached target paths.target - Paths. May 13 23:47:32.785736 systemd[1602]: Reached target timers.target - Timers. May 13 23:47:32.787141 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:47:32.798647 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:47:32.798770 systemd[1602]: Reached target sockets.target - Sockets. May 13 23:47:32.798810 systemd[1602]: Reached target basic.target - Basic System. May 13 23:47:32.798850 systemd[1602]: Reached target default.target - Main User Target. May 13 23:47:32.798879 systemd[1602]: Startup finished in 155ms. May 13 23:47:32.799247 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:47:32.801007 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:47:32.861140 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:59176.service - OpenSSH per-connection server daemon (10.0.0.1:59176). May 13 23:47:32.917321 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 59176 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:47:32.918663 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:32.922585 systemd-logind[1469]: New session 2 of user core. 
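This first kubelet failure is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until the node is initialized (on a kubeadm-managed node it is written by kubeadm init or kubeadm join), so the unit exits and systemd will keep retrying it. A sketch of how one would confirm and resolve this, with an illustrative kubeadm invocation rather than anything taken from this host:

    # The file the kubelet is refusing to start without
    ls -l /var/lib/kubelet/config.yaml
    # On a kubeadm cluster the file appears after bootstrapping, e.g.:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    # The unit should then stop crash-looping
    systemctl status kubelet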
May 13 23:47:32.932521 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:47:32.984121 sshd[1615]: Connection closed by 10.0.0.1 port 59176 May 13 23:47:32.984423 sshd-session[1613]: pam_unix(sshd:session): session closed for user core May 13 23:47:32.995894 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:59176.service: Deactivated successfully. May 13 23:47:32.997587 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:47:32.999021 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit. May 13 23:47:33.000368 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:59190.service - OpenSSH per-connection server daemon (10.0.0.1:59190). May 13 23:47:33.001344 systemd-logind[1469]: Removed session 2. May 13 23:47:33.061015 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 59190 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:47:33.062330 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:33.066232 systemd-logind[1469]: New session 3 of user core. May 13 23:47:33.078533 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:47:33.126916 sshd[1623]: Connection closed by 10.0.0.1 port 59190 May 13 23:47:33.127180 sshd-session[1620]: pam_unix(sshd:session): session closed for user core May 13 23:47:33.144034 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:59190.service: Deactivated successfully. May 13 23:47:33.145645 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:47:33.146944 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit. May 13 23:47:33.148241 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:59206.service - OpenSSH per-connection server daemon (10.0.0.1:59206). May 13 23:47:33.149044 systemd-logind[1469]: Removed session 3. May 13 23:47:33.202546 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 59206 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:47:33.203862 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:33.207727 systemd-logind[1469]: New session 4 of user core. May 13 23:47:33.217533 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:47:33.269062 sshd[1631]: Connection closed by 10.0.0.1 port 59206 May 13 23:47:33.269293 sshd-session[1628]: pam_unix(sshd:session): session closed for user core May 13 23:47:33.281933 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:59206.service: Deactivated successfully. May 13 23:47:33.283567 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:47:33.284925 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit. May 13 23:47:33.286151 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:59222.service - OpenSSH per-connection server daemon (10.0.0.1:59222). May 13 23:47:33.287005 systemd-logind[1469]: Removed session 4. May 13 23:47:33.332371 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 59222 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:47:33.333755 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:33.337711 systemd-logind[1469]: New session 5 of user core. May 13 23:47:33.351533 systemd[1]: Started session-5.scope - Session 5 of User core. 
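Because sshd is socket-activated, every inbound connection gets its own per-connection unit named after the endpoint pair (sshd@0-10.0.0.20:22-10.0.0.1:59166.service and so on), and each login becomes a session scope; sessions 2 through 4 open and close in quick succession here before session 5 begins. To observe the same from a shell:

    # Per-connection sshd instances currently alive
    systemctl list-units 'sshd@*'
    # logind's view of the login sessions
    loginctl list-sessions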
May 13 23:47:33.407619 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:47:33.407936 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:33.426151 sudo[1640]: pam_unix(sudo:session): session closed for user root May 13 23:47:33.427536 sshd[1639]: Connection closed by 10.0.0.1 port 59222 May 13 23:47:33.427921 sshd-session[1636]: pam_unix(sshd:session): session closed for user core May 13 23:47:33.437902 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:59222.service: Deactivated successfully. May 13 23:47:33.439745 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:47:33.441105 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit. May 13 23:47:33.442565 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:59226.service - OpenSSH per-connection server daemon (10.0.0.1:59226). May 13 23:47:33.443350 systemd-logind[1469]: Removed session 5. May 13 23:47:33.490113 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 59226 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:47:33.491416 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:33.495300 systemd-logind[1469]: New session 6 of user core. May 13 23:47:33.506522 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:47:33.559336 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:47:33.559665 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:33.563137 sudo[1650]: pam_unix(sudo:session): session closed for user root May 13 23:47:33.568837 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:47:33.569152 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:33.579023 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:47:33.620473 augenrules[1672]: No rules May 13 23:47:33.621393 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:47:33.621719 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:47:33.622774 sudo[1649]: pam_unix(sudo:session): session closed for user root May 13 23:47:33.624266 sshd[1648]: Connection closed by 10.0.0.1 port 59226 May 13 23:47:33.624533 sshd-session[1645]: pam_unix(sshd:session): session closed for user core May 13 23:47:33.636071 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:59226.service: Deactivated successfully. May 13 23:47:33.637742 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:47:33.639071 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit. May 13 23:47:33.640363 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:59240.service - OpenSSH per-connection server daemon (10.0.0.1:59240). May 13 23:47:33.641380 systemd-logind[1469]: Removed session 6. May 13 23:47:33.692153 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 59240 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:47:33.693742 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:33.697617 systemd-logind[1469]: New session 7 of user core. May 13 23:47:33.707602 systemd[1]: Started session-7.scope - Session 7 of User core. 
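The sudo entries show the install flow deleting the shipped audit rule files and restarting audit-rules.service; augenrules then assembles an empty rule set from /etc/audit/rules.d, hence the "No rules" line. The equivalent manual steps:

    # Rebuild /etc/audit/audit.rules from /etc/audit/rules.d and load it
    sudo augenrules --load
    # Show what the kernel is currently enforcing ("No rules" when empty)
    sudo auditctl -l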
May 13 23:47:33.760792 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:47:33.761114 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:34.195024 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:47:34.212763 (dockerd)[1705]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:47:34.591192 dockerd[1705]: time="2025-05-13T23:47:34.591047205Z" level=info msg="Starting up" May 13 23:47:34.595310 dockerd[1705]: time="2025-05-13T23:47:34.595262695Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:47:35.135145 dockerd[1705]: time="2025-05-13T23:47:35.135094340Z" level=info msg="Loading containers: start." May 13 23:47:35.306433 kernel: Initializing XFRM netlink socket May 13 23:47:35.381086 systemd-networkd[1420]: docker0: Link UP May 13 23:47:35.451818 dockerd[1705]: time="2025-05-13T23:47:35.451768948Z" level=info msg="Loading containers: done." May 13 23:47:35.474028 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1392562812-merged.mount: Deactivated successfully. May 13 23:47:35.476005 dockerd[1705]: time="2025-05-13T23:47:35.475965310Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:47:35.476070 dockerd[1705]: time="2025-05-13T23:47:35.476048336Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:47:35.476178 dockerd[1705]: time="2025-05-13T23:47:35.476159495Z" level=info msg="Daemon has completed initialization" May 13 23:47:35.512056 dockerd[1705]: time="2025-05-13T23:47:35.511995849Z" level=info msg="API listen on /run/docker.sock" May 13 23:47:35.512119 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:47:36.852099 containerd[1489]: time="2025-05-13T23:47:36.852048822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 23:47:37.609062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824553278.mount: Deactivated successfully. 
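dockerd comes up on overlay2 but warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; that is harmless for running containers, though image builds can be slower. To check which storage driver a daemon actually selected:

    # Just the storage driver name
    docker info --format '{{.Driver}}'
    # Driver plus its backing details
    docker info | grep -A5 'Storage Driver'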
May 13 23:47:38.454748 containerd[1489]: time="2025-05-13T23:47:38.454685395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:38.455462 containerd[1489]: time="2025-05-13T23:47:38.455338410Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 13 23:47:38.456500 containerd[1489]: time="2025-05-13T23:47:38.456465365Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:38.458886 containerd[1489]: time="2025-05-13T23:47:38.458851853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:38.459808 containerd[1489]: time="2025-05-13T23:47:38.459779343Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.607694023s" May 13 23:47:38.459847 containerd[1489]: time="2025-05-13T23:47:38.459815050Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 23:47:38.460381 containerd[1489]: time="2025-05-13T23:47:38.460355444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 23:47:39.527877 containerd[1489]: time="2025-05-13T23:47:39.527827900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:39.528556 containerd[1489]: time="2025-05-13T23:47:39.528483720Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 13 23:47:39.529753 containerd[1489]: time="2025-05-13T23:47:39.529698350Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:39.531877 containerd[1489]: time="2025-05-13T23:47:39.531835289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:39.532873 containerd[1489]: time="2025-05-13T23:47:39.532833252Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.072448372s" May 13 23:47:39.532932 containerd[1489]: time="2025-05-13T23:47:39.532877104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 23:47:39.533444 
containerd[1489]: time="2025-05-13T23:47:39.533360832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 23:47:40.716885 containerd[1489]: time="2025-05-13T23:47:40.716831790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:40.717779 containerd[1489]: time="2025-05-13T23:47:40.717692736Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 13 23:47:40.718928 containerd[1489]: time="2025-05-13T23:47:40.718898438Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:40.721360 containerd[1489]: time="2025-05-13T23:47:40.721324370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:40.722197 containerd[1489]: time="2025-05-13T23:47:40.722165799Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.188780602s" May 13 23:47:40.722197 containerd[1489]: time="2025-05-13T23:47:40.722194283Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 23:47:40.722664 containerd[1489]: time="2025-05-13T23:47:40.722633567Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 23:47:41.692072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116079705.mount: Deactivated successfully. 
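The control-plane images are being fetched through containerd's CRI image service into its k8s.io namespace, and each "Pulled image" line records the resolved digest, on-disk size, and wall-clock pull time. Equivalent manual pulls, as a sketch:

    # Pull the same image directly through containerd's k8s.io namespace
    sudo ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.4
    # Or via the CRI endpoint, the way kubelet-driven pulls go
    sudo crictl pull registry.k8s.io/kube-apiserver:v1.32.4
    sudo crictl images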
May 13 23:47:42.320808 containerd[1489]: time="2025-05-13T23:47:42.320726113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:42.321715 containerd[1489]: time="2025-05-13T23:47:42.321623347Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 13 23:47:42.322746 containerd[1489]: time="2025-05-13T23:47:42.322713122Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:42.324652 containerd[1489]: time="2025-05-13T23:47:42.324613748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:42.325095 containerd[1489]: time="2025-05-13T23:47:42.325059284Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.602384499s" May 13 23:47:42.325121 containerd[1489]: time="2025-05-13T23:47:42.325097065Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 23:47:42.325620 containerd[1489]: time="2025-05-13T23:47:42.325602463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 23:47:42.397092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:47:42.398710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:42.593457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:42.615767 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:47:42.789346 kubelet[1990]: E0513 23:47:42.789309 1990 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:47:42.795515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:47:42.795712 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:47:42.796045 systemd[1]: kubelet.service: Consumed 349ms CPU time, 104.5M memory peak. May 13 23:47:43.185493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733028465.mount: Deactivated successfully. 
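kubelet.service was restarted by systemd ("restart counter is at 1") and failed again for the same missing config file; the unit's Restart= policy keeps the retry loop going until bootstrap completes. To inspect that loop:

    # Restart policy, delay between attempts, and attempts so far
    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
    # The recent failures themselves
    journalctl -u kubelet -n 50 --no-pager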
May 13 23:47:43.850741 containerd[1489]: time="2025-05-13T23:47:43.850660094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:43.851418 containerd[1489]: time="2025-05-13T23:47:43.851358976Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 13 23:47:43.852632 containerd[1489]: time="2025-05-13T23:47:43.852581620Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:43.855160 containerd[1489]: time="2025-05-13T23:47:43.855122277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:43.855969 containerd[1489]: time="2025-05-13T23:47:43.855934652Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.530306981s" May 13 23:47:43.855969 containerd[1489]: time="2025-05-13T23:47:43.855967934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 23:47:43.856530 containerd[1489]: time="2025-05-13T23:47:43.856494853Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:47:44.300364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700614710.mount: Deactivated successfully. 
May 13 23:47:44.307350 containerd[1489]: time="2025-05-13T23:47:44.307277304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:44.308028 containerd[1489]: time="2025-05-13T23:47:44.307962329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 23:47:44.309186 containerd[1489]: time="2025-05-13T23:47:44.309152713Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:44.311458 containerd[1489]: time="2025-05-13T23:47:44.311374953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:44.311899 containerd[1489]: time="2025-05-13T23:47:44.311864952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 455.333721ms" May 13 23:47:44.311899 containerd[1489]: time="2025-05-13T23:47:44.311892564Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:47:44.312386 containerd[1489]: time="2025-05-13T23:47:44.312361173Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 23:47:44.832903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222347360.mount: Deactivated successfully. 
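Unlike the other images, pause:3.10 is stored with the io.cri-containerd.pinned=pinned label, which exempts the sandbox image from containerd's image garbage collection. Two ways to see it:

    # Through the CRI endpoint
    sudo crictl images | grep pause
    # Directly in containerd's k8s.io namespace
    sudo ctr -n k8s.io images ls | grep pause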
May 13 23:47:46.452877 containerd[1489]: time="2025-05-13T23:47:46.452824720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:46.453770 containerd[1489]: time="2025-05-13T23:47:46.453716032Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 13 23:47:46.455001 containerd[1489]: time="2025-05-13T23:47:46.454968102Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:46.457765 containerd[1489]: time="2025-05-13T23:47:46.457738240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:46.458616 containerd[1489]: time="2025-05-13T23:47:46.458574348Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.1461839s" May 13 23:47:46.458665 containerd[1489]: time="2025-05-13T23:47:46.458617319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 23:47:49.170805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:49.170966 systemd[1]: kubelet.service: Consumed 349ms CPU time, 104.5M memory peak. May 13 23:47:49.173109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:49.200485 systemd[1]: Reload requested from client PID 2138 ('systemctl') (unit session-7.scope)... May 13 23:47:49.200498 systemd[1]: Reloading... May 13 23:47:49.288436 zram_generator::config[2184]: No configuration found. May 13 23:47:49.468614 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:49.570017 systemd[1]: Reloading finished in 369 ms. May 13 23:47:49.629440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:49.632624 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:49.634417 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:47:49.634679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:49.634713 systemd[1]: kubelet.service: Consumed 151ms CPU time, 92M memory peak. May 13 23:47:49.636359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:49.821036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:49.824802 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:47:49.859039 kubelet[2232]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:47:49.859039 kubelet[2232]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:47:49.859039 kubelet[2232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:49.859534 kubelet[2232]: I0513 23:47:49.859239 2232 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:47:50.093117 kubelet[2232]: I0513 23:47:50.093017 2232 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:47:50.093117 kubelet[2232]: I0513 23:47:50.093044 2232 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:47:50.093304 kubelet[2232]: I0513 23:47:50.093279 2232 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:47:50.113490 kubelet[2232]: E0513 23:47:50.113464 2232 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:50.113984 kubelet[2232]: I0513 23:47:50.113964 2232 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:47:50.122000 kubelet[2232]: I0513 23:47:50.121975 2232 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:47:50.126888 kubelet[2232]: I0513 23:47:50.126863 2232 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:47:50.127941 kubelet[2232]: I0513 23:47:50.127894 2232 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:47:50.128100 kubelet[2232]: I0513 23:47:50.127929 2232 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:47:50.128100 kubelet[2232]: I0513 23:47:50.128099 2232 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:47:50.128272 kubelet[2232]: I0513 23:47:50.128111 2232 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:47:50.128272 kubelet[2232]: I0513 23:47:50.128251 2232 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:50.130777 kubelet[2232]: I0513 23:47:50.130743 2232 kubelet.go:446] "Attempting to sync node with API server" May 13 23:47:50.130777 kubelet[2232]: I0513 23:47:50.130763 2232 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:47:50.130861 kubelet[2232]: I0513 23:47:50.130788 2232 kubelet.go:352] "Adding apiserver pod source" May 13 23:47:50.130861 kubelet[2232]: I0513 23:47:50.130802 2232 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:47:50.133265 kubelet[2232]: I0513 23:47:50.133242 2232 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:47:50.134260 kubelet[2232]: I0513 23:47:50.133639 2232 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:47:50.135594 kubelet[2232]: W0513 23:47:50.134865 2232 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
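The container_manager_linux entry above dumps the resolved node config as a single JSON blob, including the five default hard-eviction thresholds. A sketch, assuming only the field shapes visible in that logged blob, of pulling the JSON apart to list the thresholds; the struct names here are hand-rolled for illustration and are not kubelet's actual Go types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal hand-rolled mirror of fields visible in the logged
// nodeConfig blob; not kubelet's real configuration types.
type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

type nodeConfig struct {
	CgroupDriver           string      `json:"CgroupDriver"`
	CgroupRoot             string      `json:"CgroupRoot"`
	HardEvictionThresholds []threshold `json:"HardEvictionThresholds"`
}

func main() {
	// Abbreviated copy of the JSON printed in the log entry above.
	blob := `{"CgroupDriver":"systemd","CgroupRoot":"/",
	 "HardEvictionThresholds":[
	  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(blob), &cfg); err != nil {
		panic(err)
	}
	for _, t := range cfg.HardEvictionThresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```

Thresholds are expressed either as an absolute quantity (memory.available below 100Mi) or a percentage (nodefs.available below 10%), which is why the logged Value carries both a nullable Quantity and a Percentage.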
May 13 23:47:50.137454 kubelet[2232]: I0513 23:47:50.137260 2232 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:47:50.137454 kubelet[2232]: I0513 23:47:50.137304 2232 server.go:1287] "Started kubelet" May 13 23:47:50.139207 kubelet[2232]: I0513 23:47:50.139173 2232 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:47:50.139823 kubelet[2232]: W0513 23:47:50.139627 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 13 23:47:50.139823 kubelet[2232]: E0513 23:47:50.139668 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:50.139823 kubelet[2232]: W0513 23:47:50.139721 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 13 23:47:50.139823 kubelet[2232]: E0513 23:47:50.139749 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:50.139969 kubelet[2232]: I0513 23:47:50.139830 2232 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:47:50.140572 kubelet[2232]: I0513 23:47:50.140067 2232 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:47:50.140572 kubelet[2232]: I0513 23:47:50.140116 2232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:47:50.140572 kubelet[2232]: I0513 23:47:50.140247 2232 server.go:490] "Adding debug handlers to kubelet server" May 13 23:47:50.140572 kubelet[2232]: I0513 23:47:50.140298 2232 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:47:50.141259 kubelet[2232]: I0513 23:47:50.141124 2232 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:47:50.141259 kubelet[2232]: I0513 23:47:50.141229 2232 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:47:50.141320 kubelet[2232]: I0513 23:47:50.141271 2232 reconciler.go:26] "Reconciler: start to sync state" May 13 23:47:50.141603 kubelet[2232]: W0513 23:47:50.141567 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 13 23:47:50.141658 kubelet[2232]: E0513 23:47:50.141613 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:50.142193 kubelet[2232]: E0513 23:47:50.141801 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:50.142193 kubelet[2232]: E0513 23:47:50.141891 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" May 13 23:47:50.142433 kubelet[2232]: I0513 23:47:50.142416 2232 factory.go:221] Registration of the systemd container factory successfully May 13 23:47:50.142514 kubelet[2232]: I0513 23:47:50.142498 2232 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:47:50.145418 kubelet[2232]: E0513 23:47:50.142787 2232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b044f2caff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:47:50.137278453 +0000 UTC m=+0.308610977,LastTimestamp:2025-05-13 23:47:50.137278453 +0000 UTC m=+0.308610977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 23:47:50.147242 kubelet[2232]: I0513 23:47:50.147205 2232 factory.go:221] Registration of the containerd container factory successfully May 13 23:47:50.151171 kubelet[2232]: E0513 23:47:50.151141 2232 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:47:50.161806 kubelet[2232]: I0513 23:47:50.161778 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:47:50.162952 kubelet[2232]: I0513 23:47:50.162549 2232 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:47:50.162952 kubelet[2232]: I0513 23:47:50.162563 2232 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:47:50.162952 kubelet[2232]: I0513 23:47:50.162581 2232 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:50.163340 kubelet[2232]: I0513 23:47:50.163319 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:47:50.163800 kubelet[2232]: I0513 23:47:50.163481 2232 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:47:50.163800 kubelet[2232]: I0513 23:47:50.163510 2232 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 23:47:50.163800 kubelet[2232]: I0513 23:47:50.163518 2232 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:47:50.163800 kubelet[2232]: E0513 23:47:50.163589 2232 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:47:50.164171 kubelet[2232]: W0513 23:47:50.164100 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 13 23:47:50.164171 kubelet[2232]: E0513 23:47:50.164158 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:50.242497 kubelet[2232]: E0513 23:47:50.242460 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:50.264625 kubelet[2232]: E0513 23:47:50.264603 2232 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:47:50.342435 kubelet[2232]: E0513 23:47:50.342391 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" May 13 23:47:50.343502 kubelet[2232]: E0513 23:47:50.343421 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:50.443902 kubelet[2232]: E0513 23:47:50.443858 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:50.465031 kubelet[2232]: E0513 23:47:50.464997 2232 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:47:50.544385 kubelet[2232]: E0513 23:47:50.544343 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:50.556675 kubelet[2232]: I0513 23:47:50.556635 2232 policy_none.go:49] "None policy: Start" May 13 23:47:50.556675 kubelet[2232]: I0513 23:47:50.556658 2232 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:47:50.556675 kubelet[2232]: I0513 23:47:50.556672 2232 state_mem.go:35] "Initializing new in-memory state store" May 13 23:47:50.568674 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:47:50.584656 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:47:50.587375 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
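The "Failed to ensure lease exists, will retry" errors double their interval while the apiserver is still unreachable: 200ms in the entry above, then 400ms here, then 800ms in the next retry below. A sketch of that doubling backoff against the apiserver address from the log; the stand-in dial and the 7s cap are assumptions for illustration, not kubelet's code or a quoted value.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// tryEnsureLease stands in for the lease-creation call that keeps
// failing in the log; here it is just a TCP dial to the apiserver
// address taken from the log. Hypothetical helper, not kubelet code.
func tryEnsureLease(addr string) error {
	c, err := net.DialTimeout("tcp", addr, time.Second)
	if err != nil {
		return err
	}
	return c.Close()
}

func main() {
	const addr = "10.0.0.20:6443"
	// Intervals mirror the log: 200ms, then 400ms, then 800ms.
	interval, maxInterval := 200*time.Millisecond, 7*time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		if err := tryEnsureLease(addr); err == nil {
			fmt.Println("lease endpoint reachable")
			return
		}
		fmt.Printf("failed to ensure lease, will retry, interval=%s\n", interval)
		time.Sleep(interval)
		if interval *= 2; interval > maxInterval {
			interval = maxInterval
		}
	}
	fmt.Println("giving up in this sketch; kubelet itself keeps retrying")
}
```

The errors are expected at this stage: the kubelet is bootstrapping the control plane from static pods, so nothing is listening on 10.0.0.20:6443 until the kube-apiserver container it is about to create comes up.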
May 13 23:47:50.603488 kubelet[2232]: I0513 23:47:50.603142 2232 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:47:50.603488 kubelet[2232]: I0513 23:47:50.603352 2232 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:47:50.603488 kubelet[2232]: I0513 23:47:50.603363 2232 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:47:50.603612 kubelet[2232]: I0513 23:47:50.603597 2232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:47:50.604167 kubelet[2232]: E0513 23:47:50.604149 2232 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 23:47:50.604230 kubelet[2232]: E0513 23:47:50.604194 2232 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:47:50.704971 kubelet[2232]: I0513 23:47:50.704926 2232 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:50.705288 kubelet[2232]: E0513 23:47:50.705242 2232 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" May 13 23:47:50.743703 kubelet[2232]: E0513 23:47:50.743666 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" May 13 23:47:50.872135 systemd[1]: Created slice kubepods-burstable-pod7468e5641cb4ccbfe6c315b3a5709c0d.slice - libcontainer container kubepods-burstable-pod7468e5641cb4ccbfe6c315b3a5709c0d.slice. May 13 23:47:50.899187 kubelet[2232]: E0513 23:47:50.899149 2232 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:50.902443 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 23:47:50.904100 kubelet[2232]: E0513 23:47:50.904069 2232 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:50.906938 kubelet[2232]: I0513 23:47:50.906922 2232 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:50.907238 kubelet[2232]: E0513 23:47:50.907211 2232 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" May 13 23:47:50.916189 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
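The per-pod slices just created sit under the parent QoS slices (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) with the pod UID embedded in the unit name, dashes escaped to underscores (compare kubepods-besteffort-pod4d9100a3_3237_4f32_ae31_0c4464694f93.slice further down this log). A sketch of that naming rule as inferred from the slice names in this log, not taken from kubelet's source.

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds a systemd slice name shaped like the ones in
// this log: QoS slice prefix, "pod", then the pod UID with "-"
// escaped to "_" (systemd unit names reserve "-" as a hierarchy
// separator).
func podSliceName(qos, uid string) string {
	prefix := "kubepods"
	if qos != "guaranteed" { // guaranteed pods sit directly under kubepods.slice
		prefix += "-" + qos
	}
	return fmt.Sprintf("%s-pod%s.slice", prefix, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the kube-proxy pod that appears later in this log.
	fmt.Println(podSliceName("besteffort", "4d9100a3-3237-4f32-ae31-0c4464694f93"))
	// Static pods carry a config-hash UID with no dashes, as above.
	fmt.Println(podSliceName("burstable", "7468e5641cb4ccbfe6c315b3a5709c0d"))
}
```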
May 13 23:47:50.917796 kubelet[2232]: E0513 23:47:50.917764 2232 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:50.945315 kubelet[2232]: I0513 23:47:50.945254 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7468e5641cb4ccbfe6c315b3a5709c0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7468e5641cb4ccbfe6c315b3a5709c0d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:50.945315 kubelet[2232]: I0513 23:47:50.945321 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:50.945557 kubelet[2232]: I0513 23:47:50.945343 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:50.945557 kubelet[2232]: I0513 23:47:50.945366 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:50.945557 kubelet[2232]: I0513 23:47:50.945389 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:50.945557 kubelet[2232]: I0513 23:47:50.945457 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7468e5641cb4ccbfe6c315b3a5709c0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7468e5641cb4ccbfe6c315b3a5709c0d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:50.945557 kubelet[2232]: I0513 23:47:50.945497 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7468e5641cb4ccbfe6c315b3a5709c0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7468e5641cb4ccbfe6c315b3a5709c0d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:50.945699 kubelet[2232]: I0513 23:47:50.945520 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:50.945699 kubelet[2232]: I0513 23:47:50.945546 2232 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 23:47:51.199833 kubelet[2232]: E0513 23:47:51.199783 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:51.200605 containerd[1489]: time="2025-05-13T23:47:51.200545227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7468e5641cb4ccbfe6c315b3a5709c0d,Namespace:kube-system,Attempt:0,}" May 13 23:47:51.204768 kubelet[2232]: E0513 23:47:51.204742 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:51.205218 containerd[1489]: time="2025-05-13T23:47:51.205175695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 23:47:51.218475 kubelet[2232]: E0513 23:47:51.218449 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:51.218816 containerd[1489]: time="2025-05-13T23:47:51.218781977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 23:47:51.232746 containerd[1489]: time="2025-05-13T23:47:51.232697168Z" level=info msg="connecting to shim 0f752a436320c8f25daefefb972f7293f4dadc3aab4a96ab8b93c50c0b87a4c7" address="unix:///run/containerd/s/94165892f7419611d1c5822339c35e986de3c6ae505406f82a42717d20b1484d" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:51.234957 containerd[1489]: time="2025-05-13T23:47:51.234931710Z" level=info msg="connecting to shim e93ea7973734f2351341b1d2f0e9d2c7d245cc2a601c981e09d7be84a4b20d41" address="unix:///run/containerd/s/6abc85534a8a5b7132f9ee90d6670f22ff7213e4aba49c85c53b164e61489862" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:51.254924 containerd[1489]: time="2025-05-13T23:47:51.254767100Z" level=info msg="connecting to shim 14f1fb0843a897fa9adb2c28496171ec59d269dc81d93b3880f1982e177b788c" address="unix:///run/containerd/s/bb35296fb3783cfe8920e59a4cc5e3b795f5bae3eea572855925f9af414faf70" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:51.261596 systemd[1]: Started cri-containerd-0f752a436320c8f25daefefb972f7293f4dadc3aab4a96ab8b93c50c0b87a4c7.scope - libcontainer container 0f752a436320c8f25daefefb972f7293f4dadc3aab4a96ab8b93c50c0b87a4c7. May 13 23:47:51.265131 systemd[1]: Started cri-containerd-e93ea7973734f2351341b1d2f0e9d2c7d245cc2a601c981e09d7be84a4b20d41.scope - libcontainer container e93ea7973734f2351341b1d2f0e9d2c7d245cc2a601c981e09d7be84a4b20d41. May 13 23:47:51.282566 systemd[1]: Started cri-containerd-14f1fb0843a897fa9adb2c28496171ec59d269dc81d93b3880f1982e177b788c.scope - libcontainer container 14f1fb0843a897fa9adb2c28496171ec59d269dc81d93b3880f1982e177b788c. 
May 13 23:47:51.308859 kubelet[2232]: I0513 23:47:51.308818 2232 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:51.309153 kubelet[2232]: E0513 23:47:51.309121 2232 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" May 13 23:47:51.313684 containerd[1489]: time="2025-05-13T23:47:51.313622458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7468e5641cb4ccbfe6c315b3a5709c0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f752a436320c8f25daefefb972f7293f4dadc3aab4a96ab8b93c50c0b87a4c7\"" May 13 23:47:51.314810 kubelet[2232]: E0513 23:47:51.314790 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:51.318532 containerd[1489]: time="2025-05-13T23:47:51.318350340Z" level=info msg="CreateContainer within sandbox \"0f752a436320c8f25daefefb972f7293f4dadc3aab4a96ab8b93c50c0b87a4c7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:47:51.323665 kubelet[2232]: W0513 23:47:51.323606 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 13 23:47:51.323784 kubelet[2232]: E0513 23:47:51.323667 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:51.325326 containerd[1489]: time="2025-05-13T23:47:51.325210101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"e93ea7973734f2351341b1d2f0e9d2c7d245cc2a601c981e09d7be84a4b20d41\"" May 13 23:47:51.326607 kubelet[2232]: E0513 23:47:51.326590 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:51.328164 containerd[1489]: time="2025-05-13T23:47:51.328137695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"14f1fb0843a897fa9adb2c28496171ec59d269dc81d93b3880f1982e177b788c\"" May 13 23:47:51.328515 containerd[1489]: time="2025-05-13T23:47:51.328191065Z" level=info msg="CreateContainer within sandbox \"e93ea7973734f2351341b1d2f0e9d2c7d245cc2a601c981e09d7be84a4b20d41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:47:51.328792 kubelet[2232]: E0513 23:47:51.328702 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:51.329912 containerd[1489]: time="2025-05-13T23:47:51.329892057Z" level=info msg="CreateContainer within sandbox \"14f1fb0843a897fa9adb2c28496171ec59d269dc81d93b3880f1982e177b788c\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:47:51.332292 containerd[1489]: time="2025-05-13T23:47:51.332266713Z" level=info msg="Container 046b12617e48f7d7a977d0c3f717947154b816523d5e8518ac583be75d91ac1a: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:51.340218 containerd[1489]: time="2025-05-13T23:47:51.340181875Z" level=info msg="Container 3a0207efbc626ba16218f0c20b9e90e0fdfa3ecabe364b1205698196fa52b4e9: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:51.340752 containerd[1489]: time="2025-05-13T23:47:51.340725685Z" level=info msg="CreateContainer within sandbox \"0f752a436320c8f25daefefb972f7293f4dadc3aab4a96ab8b93c50c0b87a4c7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"046b12617e48f7d7a977d0c3f717947154b816523d5e8518ac583be75d91ac1a\"" May 13 23:47:51.341255 containerd[1489]: time="2025-05-13T23:47:51.341216767Z" level=info msg="StartContainer for \"046b12617e48f7d7a977d0c3f717947154b816523d5e8518ac583be75d91ac1a\"" May 13 23:47:51.342373 containerd[1489]: time="2025-05-13T23:47:51.342225920Z" level=info msg="connecting to shim 046b12617e48f7d7a977d0c3f717947154b816523d5e8518ac583be75d91ac1a" address="unix:///run/containerd/s/94165892f7419611d1c5822339c35e986de3c6ae505406f82a42717d20b1484d" protocol=ttrpc version=3 May 13 23:47:51.343354 containerd[1489]: time="2025-05-13T23:47:51.343325263Z" level=info msg="Container bb8d44e5643c349c2d5f0b1a1b8c9aff7581c3d242ead3786da20a70d3cf84aa: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:51.354947 containerd[1489]: time="2025-05-13T23:47:51.354894512Z" level=info msg="CreateContainer within sandbox \"14f1fb0843a897fa9adb2c28496171ec59d269dc81d93b3880f1982e177b788c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb8d44e5643c349c2d5f0b1a1b8c9aff7581c3d242ead3786da20a70d3cf84aa\"" May 13 23:47:51.355387 containerd[1489]: time="2025-05-13T23:47:51.355349255Z" level=info msg="StartContainer for \"bb8d44e5643c349c2d5f0b1a1b8c9aff7581c3d242ead3786da20a70d3cf84aa\"" May 13 23:47:51.356195 kubelet[2232]: W0513 23:47:51.356077 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 13 23:47:51.356195 kubelet[2232]: E0513 23:47:51.356160 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:51.356877 containerd[1489]: time="2025-05-13T23:47:51.356840604Z" level=info msg="CreateContainer within sandbox \"e93ea7973734f2351341b1d2f0e9d2c7d245cc2a601c981e09d7be84a4b20d41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a0207efbc626ba16218f0c20b9e90e0fdfa3ecabe364b1205698196fa52b4e9\"" May 13 23:47:51.357419 containerd[1489]: time="2025-05-13T23:47:51.357322147Z" level=info msg="connecting to shim bb8d44e5643c349c2d5f0b1a1b8c9aff7581c3d242ead3786da20a70d3cf84aa" address="unix:///run/containerd/s/bb35296fb3783cfe8920e59a4cc5e3b795f5bae3eea572855925f9af414faf70" protocol=ttrpc version=3 May 13 23:47:51.357649 containerd[1489]: time="2025-05-13T23:47:51.357619555Z" level=info msg="StartContainer for 
\"3a0207efbc626ba16218f0c20b9e90e0fdfa3ecabe364b1205698196fa52b4e9\"" May 13 23:47:51.358969 containerd[1489]: time="2025-05-13T23:47:51.358727695Z" level=info msg="connecting to shim 3a0207efbc626ba16218f0c20b9e90e0fdfa3ecabe364b1205698196fa52b4e9" address="unix:///run/containerd/s/6abc85534a8a5b7132f9ee90d6670f22ff7213e4aba49c85c53b164e61489862" protocol=ttrpc version=3 May 13 23:47:51.365613 systemd[1]: Started cri-containerd-046b12617e48f7d7a977d0c3f717947154b816523d5e8518ac583be75d91ac1a.scope - libcontainer container 046b12617e48f7d7a977d0c3f717947154b816523d5e8518ac583be75d91ac1a. May 13 23:47:51.391625 systemd[1]: Started cri-containerd-3a0207efbc626ba16218f0c20b9e90e0fdfa3ecabe364b1205698196fa52b4e9.scope - libcontainer container 3a0207efbc626ba16218f0c20b9e90e0fdfa3ecabe364b1205698196fa52b4e9. May 13 23:47:51.393155 systemd[1]: Started cri-containerd-bb8d44e5643c349c2d5f0b1a1b8c9aff7581c3d242ead3786da20a70d3cf84aa.scope - libcontainer container bb8d44e5643c349c2d5f0b1a1b8c9aff7581c3d242ead3786da20a70d3cf84aa. May 13 23:47:51.405058 kubelet[2232]: W0513 23:47:51.404989 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 13 23:47:51.405211 kubelet[2232]: E0513 23:47:51.405187 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:51.430828 kubelet[2232]: W0513 23:47:51.430746 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 13 23:47:51.430828 kubelet[2232]: E0513 23:47:51.430817 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:51.436356 containerd[1489]: time="2025-05-13T23:47:51.436314421Z" level=info msg="StartContainer for \"046b12617e48f7d7a977d0c3f717947154b816523d5e8518ac583be75d91ac1a\" returns successfully" May 13 23:47:51.445884 containerd[1489]: time="2025-05-13T23:47:51.445845535Z" level=info msg="StartContainer for \"3a0207efbc626ba16218f0c20b9e90e0fdfa3ecabe364b1205698196fa52b4e9\" returns successfully" May 13 23:47:51.451575 containerd[1489]: time="2025-05-13T23:47:51.451437969Z" level=info msg="StartContainer for \"bb8d44e5643c349c2d5f0b1a1b8c9aff7581c3d242ead3786da20a70d3cf84aa\" returns successfully" May 13 23:47:52.111305 kubelet[2232]: I0513 23:47:52.111266 2232 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:52.172678 kubelet[2232]: E0513 23:47:52.172637 2232 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:52.172840 kubelet[2232]: E0513 23:47:52.172773 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:52.176329 kubelet[2232]: E0513 23:47:52.176152 2232 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:52.176329 kubelet[2232]: E0513 23:47:52.176185 2232 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:52.176329 kubelet[2232]: E0513 23:47:52.176258 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:52.176329 kubelet[2232]: E0513 23:47:52.176330 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:52.563003 kubelet[2232]: E0513 23:47:52.562948 2232 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 23:47:52.649262 kubelet[2232]: I0513 23:47:52.648690 2232 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 23:47:52.649262 kubelet[2232]: E0513 23:47:52.648725 2232 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 23:47:52.657726 kubelet[2232]: E0513 23:47:52.657602 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:52.758010 kubelet[2232]: E0513 23:47:52.757960 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:52.858497 kubelet[2232]: E0513 23:47:52.858342 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:52.958998 kubelet[2232]: E0513 23:47:52.958933 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:53.059416 kubelet[2232]: E0513 23:47:53.059366 2232 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:53.140821 kubelet[2232]: I0513 23:47:53.140732 2232 apiserver.go:52] "Watching apiserver" May 13 23:47:53.142364 kubelet[2232]: I0513 23:47:53.142339 2232 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:47:53.145966 kubelet[2232]: E0513 23:47:53.145946 2232 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 23:47:53.145966 kubelet[2232]: I0513 23:47:53.145965 2232 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:53.146980 kubelet[2232]: E0513 23:47:53.146963 2232 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:53.146980 kubelet[2232]: I0513 23:47:53.146978 2232 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" May 13 23:47:53.147920 kubelet[2232]: E0513 23:47:53.147900 2232 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 23:47:53.176711 kubelet[2232]: I0513 23:47:53.176681 2232 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:47:53.176839 kubelet[2232]: I0513 23:47:53.176829 2232 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:47:53.178014 kubelet[2232]: E0513 23:47:53.177980 2232 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 23:47:53.178132 kubelet[2232]: E0513 23:47:53.178115 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:53.178600 kubelet[2232]: E0513 23:47:53.178569 2232 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 23:47:53.178682 kubelet[2232]: E0513 23:47:53.178664 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:53.242089 kubelet[2232]: I0513 23:47:53.242058 2232 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:47:53.791458 kubelet[2232]: I0513 23:47:53.791429 2232 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:53.795453 kubelet[2232]: E0513 23:47:53.795428 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:54.177642 kubelet[2232]: E0513 23:47:54.177620 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:54.761340 systemd[1]: Reload requested from client PID 2508 ('systemctl') (unit session-7.scope)... May 13 23:47:54.761360 systemd[1]: Reloading... May 13 23:47:54.829441 zram_generator::config[2552]: No configuration found. May 13 23:47:54.942499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:55.059685 systemd[1]: Reloading finished in 297 ms. May 13 23:47:55.083828 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:55.101681 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:47:55.101982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:55.102034 systemd[1]: kubelet.service: Consumed 760ms CPU time, 129.1M memory peak. May 13 23:47:55.104151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 23:47:55.289773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:55.298784 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:47:55.336352 kubelet[2597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:55.336352 kubelet[2597]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:47:55.336352 kubelet[2597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:55.336352 kubelet[2597]: I0513 23:47:55.336309 2597 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:47:55.343900 kubelet[2597]: I0513 23:47:55.343868 2597 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:47:55.343900 kubelet[2597]: I0513 23:47:55.343890 2597 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:47:55.344121 kubelet[2597]: I0513 23:47:55.344105 2597 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:47:55.345156 kubelet[2597]: I0513 23:47:55.345137 2597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:47:55.347695 kubelet[2597]: I0513 23:47:55.347670 2597 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:47:55.351081 kubelet[2597]: I0513 23:47:55.351056 2597 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:47:55.356760 kubelet[2597]: I0513 23:47:55.356729 2597 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:47:55.357012 kubelet[2597]: I0513 23:47:55.356975 2597 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:47:55.357178 kubelet[2597]: I0513 23:47:55.357013 2597 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:47:55.357285 kubelet[2597]: I0513 23:47:55.357187 2597 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:47:55.357285 kubelet[2597]: I0513 23:47:55.357196 2597 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:47:55.357285 kubelet[2597]: I0513 23:47:55.357240 2597 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:55.357454 kubelet[2597]: I0513 23:47:55.357433 2597 kubelet.go:446] "Attempting to sync node with API server" May 13 23:47:55.357454 kubelet[2597]: I0513 23:47:55.357448 2597 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:47:55.357530 kubelet[2597]: I0513 23:47:55.357467 2597 kubelet.go:352] "Adding apiserver pod source" May 13 23:47:55.357530 kubelet[2597]: I0513 23:47:55.357477 2597 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:47:55.358589 kubelet[2597]: I0513 23:47:55.358541 2597 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:47:55.361420 kubelet[2597]: I0513 23:47:55.359660 2597 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:47:55.361420 kubelet[2597]: I0513 23:47:55.360234 2597 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:47:55.361420 kubelet[2597]: I0513 23:47:55.360262 2597 server.go:1287] "Started kubelet" May 13 23:47:55.361420 kubelet[2597]: I0513 23:47:55.361176 2597 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:47:55.361420 kubelet[2597]: I0513 23:47:55.361203 2597 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:47:55.361630 kubelet[2597]: I0513 23:47:55.361605 2597 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:47:55.362511 kubelet[2597]: I0513 23:47:55.362496 2597 server.go:490] "Adding debug handlers to kubelet server" May 13 23:47:55.369655 kubelet[2597]: I0513 23:47:55.369624 2597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:47:55.371441 kubelet[2597]: I0513 23:47:55.371422 2597 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:47:55.372187 kubelet[2597]: I0513 23:47:55.372163 2597 factory.go:221] Registration of the systemd container factory successfully May 13 23:47:55.372513 kubelet[2597]: I0513 23:47:55.372491 2597 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:47:55.373064 kubelet[2597]: I0513 23:47:55.372292 2597 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:47:55.373535 kubelet[2597]: I0513 23:47:55.372557 2597 reconciler.go:26] "Reconciler: start to sync state" May 13 23:47:55.373649 kubelet[2597]: I0513 23:47:55.372519 2597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:47:55.374353 kubelet[2597]: E0513 23:47:55.374329 2597 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:47:55.375758 kubelet[2597]: I0513 23:47:55.375739 2597 factory.go:221] Registration of the containerd container factory successfully May 13 23:47:55.381819 kubelet[2597]: I0513 23:47:55.381690 2597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:47:55.383316 kubelet[2597]: I0513 23:47:55.383025 2597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:47:55.383316 kubelet[2597]: I0513 23:47:55.383044 2597 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:47:55.383316 kubelet[2597]: I0513 23:47:55.383061 2597 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 23:47:55.383316 kubelet[2597]: I0513 23:47:55.383068 2597 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:47:55.383316 kubelet[2597]: E0513 23:47:55.383121 2597 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:47:55.410386 kubelet[2597]: I0513 23:47:55.410358 2597 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:47:55.410386 kubelet[2597]: I0513 23:47:55.410377 2597 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:47:55.410386 kubelet[2597]: I0513 23:47:55.410396 2597 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:55.410590 kubelet[2597]: I0513 23:47:55.410542 2597 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:47:55.410590 kubelet[2597]: I0513 23:47:55.410552 2597 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:47:55.410590 kubelet[2597]: I0513 23:47:55.410567 2597 policy_none.go:49] "None policy: Start" May 13 23:47:55.410590 kubelet[2597]: I0513 23:47:55.410576 2597 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:47:55.410590 kubelet[2597]: I0513 23:47:55.410586 2597 state_mem.go:35] "Initializing new in-memory state store" May 13 23:47:55.410700 kubelet[2597]: I0513 23:47:55.410666 2597 state_mem.go:75] "Updated machine memory state" May 13 23:47:55.414183 kubelet[2597]: I0513 23:47:55.414118 2597 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:47:55.414335 kubelet[2597]: I0513 23:47:55.414313 2597 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:47:55.414380 kubelet[2597]: I0513 23:47:55.414333 2597 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:47:55.414549 kubelet[2597]: I0513 23:47:55.414531 2597 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:47:55.416092 kubelet[2597]: E0513 23:47:55.416018 2597 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 23:47:55.484088 kubelet[2597]: I0513 23:47:55.484024 2597 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:55.484221 kubelet[2597]: I0513 23:47:55.484028 2597 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:47:55.484275 kubelet[2597]: I0513 23:47:55.484150 2597 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:47:55.489882 kubelet[2597]: E0513 23:47:55.489835 2597 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:55.521499 kubelet[2597]: I0513 23:47:55.521435 2597 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:55.526759 kubelet[2597]: I0513 23:47:55.526728 2597 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 23:47:55.526836 kubelet[2597]: I0513 23:47:55.526822 2597 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 23:47:55.574692 kubelet[2597]: I0513 23:47:55.574644 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7468e5641cb4ccbfe6c315b3a5709c0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7468e5641cb4ccbfe6c315b3a5709c0d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:55.574692 kubelet[2597]: I0513 23:47:55.574696 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7468e5641cb4ccbfe6c315b3a5709c0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7468e5641cb4ccbfe6c315b3a5709c0d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:55.574844 kubelet[2597]: I0513 23:47:55.574721 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:55.574844 kubelet[2597]: I0513 23:47:55.574747 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7468e5641cb4ccbfe6c315b3a5709c0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7468e5641cb4ccbfe6c315b3a5709c0d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:55.574844 kubelet[2597]: I0513 23:47:55.574777 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:55.574844 kubelet[2597]: I0513 23:47:55.574804 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 23:47:55.574973 kubelet[2597]: I0513 23:47:55.574914 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:55.574995 kubelet[2597]: I0513 23:47:55.574971 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:55.575033 kubelet[2597]: I0513 23:47:55.575001 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 23:47:55.764310 sudo[2635]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 23:47:55.764747 sudo[2635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 23:47:55.790821 kubelet[2597]: E0513 23:47:55.790705 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:55.790821 kubelet[2597]: E0513 23:47:55.790737 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:55.790821 kubelet[2597]: E0513 23:47:55.790763 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:56.229932 sudo[2635]: pam_unix(sudo:session): session closed for user root May 13 23:47:56.358209 kubelet[2597]: I0513 23:47:56.358149 2597 apiserver.go:52] "Watching apiserver" May 13 23:47:56.376060 kubelet[2597]: I0513 23:47:56.376011 2597 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:47:56.398998 kubelet[2597]: I0513 23:47:56.397944 2597 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:47:56.398998 kubelet[2597]: I0513 23:47:56.397944 2597 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:47:56.398998 kubelet[2597]: E0513 23:47:56.398377 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:56.407416 kubelet[2597]: E0513 23:47:56.405883 2597 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 23:47:56.407416 kubelet[2597]: E0513 23:47:56.406104 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:56.407416 kubelet[2597]: E0513 23:47:56.406340 2597 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:47:56.407416 kubelet[2597]: E0513 23:47:56.406457 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:56.435280 kubelet[2597]: I0513 23:47:56.435200 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.435180023 podStartE2EDuration="1.435180023s" podCreationTimestamp="2025-05-13 23:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:56.426888345 +0000 UTC m=+1.124111346" watchObservedRunningTime="2025-05-13 23:47:56.435180023 +0000 UTC m=+1.132403024" May 13 23:47:56.435513 kubelet[2597]: I0513 23:47:56.435312 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.4353081039999998 podStartE2EDuration="3.435308104s" podCreationTimestamp="2025-05-13 23:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:56.435032487 +0000 UTC m=+1.132255478" watchObservedRunningTime="2025-05-13 23:47:56.435308104 +0000 UTC m=+1.132531105" May 13 23:47:56.442071 kubelet[2597]: I0513 23:47:56.442012 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.441991144 podStartE2EDuration="1.441991144s" podCreationTimestamp="2025-05-13 23:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:56.441886528 +0000 UTC m=+1.139109519" watchObservedRunningTime="2025-05-13 23:47:56.441991144 +0000 UTC m=+1.139214145" May 13 23:47:57.399339 kubelet[2597]: E0513 23:47:57.399251 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:57.399339 kubelet[2597]: E0513 23:47:57.399295 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:57.619700 sudo[1684]: pam_unix(sudo:session): session closed for user root May 13 23:47:57.621303 sshd[1683]: Connection closed by 10.0.0.1 port 59240 May 13 23:47:57.621663 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 13 23:47:57.625104 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:59240.service: Deactivated successfully. May 13 23:47:57.627098 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:47:57.627304 systemd[1]: session-7.scope: Consumed 4.532s CPU time, 252M memory peak. May 13 23:47:57.628471 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. May 13 23:47:57.629316 systemd-logind[1469]: Removed session 7. 
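
The dns.go:153 errors repeated throughout this log mean the node's resolv.conf listed more nameservers than the resolver limit (three on Linux/glibc), so kubelet kept only the first three: 1.1.1.1, 1.0.0.1, 8.8.8.8. A minimal sketch of that truncation in Go, assuming a hypothetical fourth entry (8.8.4.4) that the log itself never shows:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Hypothetical resolv.conf; only the first three entries appear in the log.
        resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            f := strings.Fields(line)
            if len(f) == 2 && f[0] == "nameserver" {
                servers = append(servers, f[1])
            }
        }
        const maxNS = 3 // glibc resolver limit (MAXNS)
        if len(servers) > maxNS {
            servers = servers[:maxNS] // "some nameservers have been omitted"
        }
        fmt.Println(strings.Join(servers, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
    }
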
May 13 23:47:58.396353 kubelet[2597]: E0513 23:47:58.396294 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:58.400638 kubelet[2597]: E0513 23:47:58.400606 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:00.216214 kubelet[2597]: I0513 23:48:00.216182 2597 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:48:00.216658 containerd[1489]: time="2025-05-13T23:48:00.216583576Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:48:00.216906 kubelet[2597]: I0513 23:48:00.216757 2597 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:48:00.844315 systemd[1]: Created slice kubepods-besteffort-pod4d9100a3_3237_4f32_ae31_0c4464694f93.slice - libcontainer container kubepods-besteffort-pod4d9100a3_3237_4f32_ae31_0c4464694f93.slice. May 13 23:48:00.854789 systemd[1]: Created slice kubepods-burstable-pod6d90859b_f43a_479f_baf3_89c1b7de86d7.slice - libcontainer container kubepods-burstable-pod6d90859b_f43a_479f_baf3_89c1b7de86d7.slice. May 13 23:48:00.908110 kubelet[2597]: I0513 23:48:00.908046 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwf7g\" (UniqueName: \"kubernetes.io/projected/4d9100a3-3237-4f32-ae31-0c4464694f93-kube-api-access-hwf7g\") pod \"kube-proxy-f9np4\" (UID: \"4d9100a3-3237-4f32-ae31-0c4464694f93\") " pod="kube-system/kube-proxy-f9np4" May 13 23:48:00.908110 kubelet[2597]: I0513 23:48:00.908103 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-host-proc-sys-net\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908311 kubelet[2597]: I0513 23:48:00.908134 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-host-proc-sys-kernel\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908311 kubelet[2597]: I0513 23:48:00.908156 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d9100a3-3237-4f32-ae31-0c4464694f93-kube-proxy\") pod \"kube-proxy-f9np4\" (UID: \"4d9100a3-3237-4f32-ae31-0c4464694f93\") " pod="kube-system/kube-proxy-f9np4" May 13 23:48:00.908311 kubelet[2597]: I0513 23:48:00.908173 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d9100a3-3237-4f32-ae31-0c4464694f93-lib-modules\") pod \"kube-proxy-f9np4\" (UID: \"4d9100a3-3237-4f32-ae31-0c4464694f93\") " pod="kube-system/kube-proxy-f9np4" May 13 23:48:00.908311 kubelet[2597]: I0513 23:48:00.908198 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-lib-modules\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908311 kubelet[2597]: I0513 23:48:00.908218 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-config-path\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908525 kubelet[2597]: I0513 23:48:00.908353 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d9100a3-3237-4f32-ae31-0c4464694f93-xtables-lock\") pod \"kube-proxy-f9np4\" (UID: \"4d9100a3-3237-4f32-ae31-0c4464694f93\") " pod="kube-system/kube-proxy-f9np4" May 13 23:48:00.908525 kubelet[2597]: I0513 23:48:00.908424 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-hubble-tls\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908525 kubelet[2597]: I0513 23:48:00.908451 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-hostproc\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908525 kubelet[2597]: I0513 23:48:00.908471 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-xtables-lock\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908525 kubelet[2597]: I0513 23:48:00.908494 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d90859b-f43a-479f-baf3-89c1b7de86d7-clustermesh-secrets\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908525 kubelet[2597]: I0513 23:48:00.908520 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f9rw\" (UniqueName: \"kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-kube-api-access-2f9rw\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908718 kubelet[2597]: I0513 23:48:00.908554 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-bpf-maps\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908718 kubelet[2597]: I0513 23:48:00.908590 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-cgroup\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 
13 23:48:00.908718 kubelet[2597]: I0513 23:48:00.908613 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cni-path\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908817 kubelet[2597]: I0513 23:48:00.908678 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-run\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:00.908817 kubelet[2597]: I0513 23:48:00.908749 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-etc-cni-netd\") pod \"cilium-lgbkh\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") " pod="kube-system/cilium-lgbkh" May 13 23:48:01.025141 kubelet[2597]: E0513 23:48:01.024860 2597 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 23:48:01.025141 kubelet[2597]: E0513 23:48:01.024893 2597 projected.go:194] Error preparing data for projected volume kube-api-access-2f9rw for pod kube-system/cilium-lgbkh: configmap "kube-root-ca.crt" not found May 13 23:48:01.025141 kubelet[2597]: E0513 23:48:01.024955 2597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-kube-api-access-2f9rw podName:6d90859b-f43a-479f-baf3-89c1b7de86d7 nodeName:}" failed. No retries permitted until 2025-05-13 23:48:01.524933036 +0000 UTC m=+6.222156037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2f9rw" (UniqueName: "kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-kube-api-access-2f9rw") pod "cilium-lgbkh" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7") : configmap "kube-root-ca.crt" not found May 13 23:48:01.025394 kubelet[2597]: E0513 23:48:01.025366 2597 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 23:48:01.025394 kubelet[2597]: E0513 23:48:01.025393 2597 projected.go:194] Error preparing data for projected volume kube-api-access-hwf7g for pod kube-system/kube-proxy-f9np4: configmap "kube-root-ca.crt" not found May 13 23:48:01.025504 kubelet[2597]: E0513 23:48:01.025461 2597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d9100a3-3237-4f32-ae31-0c4464694f93-kube-api-access-hwf7g podName:4d9100a3-3237-4f32-ae31-0c4464694f93 nodeName:}" failed. No retries permitted until 2025-05-13 23:48:01.52544706 +0000 UTC m=+6.222670061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hwf7g" (UniqueName: "kubernetes.io/projected/4d9100a3-3237-4f32-ae31-0c4464694f93-kube-api-access-hwf7g") pod "kube-proxy-f9np4" (UID: "4d9100a3-3237-4f32-ae31-0c4464694f93") : configmap "kube-root-ca.crt" not found May 13 23:48:01.277212 systemd[1]: Created slice kubepods-besteffort-pod983710ce_433a_4547_b775_1367d88b1600.slice - libcontainer container kubepods-besteffort-pod983710ce_433a_4547_b775_1367d88b1600.slice. 
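
The MountVolume.SetUp failures above are bootstrap ordering noise: the projected kube-api-access volumes need the kube-root-ca.crt configmap, which kube-controller-manager's root-CA publisher had not yet created, so each operation is requeued with durationBeforeRetry 500ms. A sketch of the doubling retry delay this implies; the 500ms start is in the log, while the growth factor and the 2m2s cap are assumptions based on kubelet's usual exponential backoff:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond                // durationBeforeRetry from the log
        const maxDelay = 2*time.Minute + 2*time.Second // assumed upper bound
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d: retry after %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
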
May 13 23:48:01.312669 kubelet[2597]: I0513 23:48:01.312587 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm9k6\" (UniqueName: \"kubernetes.io/projected/983710ce-433a-4547-b775-1367d88b1600-kube-api-access-fm9k6\") pod \"cilium-operator-6c4d7847fc-tv6c5\" (UID: \"983710ce-433a-4547-b775-1367d88b1600\") " pod="kube-system/cilium-operator-6c4d7847fc-tv6c5" May 13 23:48:01.312669 kubelet[2597]: I0513 23:48:01.312657 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983710ce-433a-4547-b775-1367d88b1600-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tv6c5\" (UID: \"983710ce-433a-4547-b775-1367d88b1600\") " pod="kube-system/cilium-operator-6c4d7847fc-tv6c5" May 13 23:48:01.581203 kubelet[2597]: E0513 23:48:01.581014 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:01.581868 containerd[1489]: time="2025-05-13T23:48:01.581794957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tv6c5,Uid:983710ce-433a-4547-b775-1367d88b1600,Namespace:kube-system,Attempt:0,}" May 13 23:48:01.604918 containerd[1489]: time="2025-05-13T23:48:01.604863534Z" level=info msg="connecting to shim 21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19" address="unix:///run/containerd/s/05ac07d1facf1fd46eb8e8d6d163104701542adee1ad595db4119c7e65d525d0" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:01.637692 systemd[1]: Started cri-containerd-21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19.scope - libcontainer container 21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19. 
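
The "connecting to shim ... address=unix:///run/containerd/s/..." entries show containerd reaching each sandbox's shim over a per-sandbox unix socket, speaking ttrpc (protocol=ttrpc version=3). A sketch of opening such a socket, using the address logged above with the unix:// scheme stripped; a real client would layer a ttrpc client over the connection rather than read raw bytes:

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // Socket path taken from the shim address logged above.
        conn, err := net.Dial("unix", "/run/containerd/s/05ac07d1facf1fd46eb8e8d6d163104701542adee1ad595db4119c7e65d525d0")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        log.Println("connected; a ttrpc client would be constructed on top of conn")
    }
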
May 13 23:48:01.682583 containerd[1489]: time="2025-05-13T23:48:01.682528967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tv6c5,Uid:983710ce-433a-4547-b775-1367d88b1600,Namespace:kube-system,Attempt:0,} returns sandbox id \"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\"" May 13 23:48:01.683465 kubelet[2597]: E0513 23:48:01.683373 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:01.684674 containerd[1489]: time="2025-05-13T23:48:01.684591037Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 23:48:01.753797 kubelet[2597]: E0513 23:48:01.753724 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:01.754373 containerd[1489]: time="2025-05-13T23:48:01.754313977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9np4,Uid:4d9100a3-3237-4f32-ae31-0c4464694f93,Namespace:kube-system,Attempt:0,}" May 13 23:48:01.758489 kubelet[2597]: E0513 23:48:01.758398 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:01.758942 containerd[1489]: time="2025-05-13T23:48:01.758906494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgbkh,Uid:6d90859b-f43a-479f-baf3-89c1b7de86d7,Namespace:kube-system,Attempt:0,}" May 13 23:48:01.810917 containerd[1489]: time="2025-05-13T23:48:01.810867996Z" level=info msg="connecting to shim f23c264ec94010ff0062ffb83c47150bb741c24b046b492398c31226f0826100" address="unix:///run/containerd/s/29d0f26566e841b09f0532bd9cdfb39bb7765ddb20830f285f88f4f1f221fca9" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:01.811613 containerd[1489]: time="2025-05-13T23:48:01.811527053Z" level=info msg="connecting to shim a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd" address="unix:///run/containerd/s/66ab3125d181ea48434843340b58728c329a48e4b71c22aab8351be03f5ba531" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:01.858591 systemd[1]: Started cri-containerd-f23c264ec94010ff0062ffb83c47150bb741c24b046b492398c31226f0826100.scope - libcontainer container f23c264ec94010ff0062ffb83c47150bb741c24b046b492398c31226f0826100. May 13 23:48:01.862767 systemd[1]: Started cri-containerd-a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd.scope - libcontainer container a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd. 
May 13 23:48:01.889319 containerd[1489]: time="2025-05-13T23:48:01.888785313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9np4,Uid:4d9100a3-3237-4f32-ae31-0c4464694f93,Namespace:kube-system,Attempt:0,} returns sandbox id \"f23c264ec94010ff0062ffb83c47150bb741c24b046b492398c31226f0826100\"" May 13 23:48:01.889540 kubelet[2597]: E0513 23:48:01.889509 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:01.892976 containerd[1489]: time="2025-05-13T23:48:01.892932374Z" level=info msg="CreateContainer within sandbox \"f23c264ec94010ff0062ffb83c47150bb741c24b046b492398c31226f0826100\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:48:01.894498 containerd[1489]: time="2025-05-13T23:48:01.894466102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgbkh,Uid:6d90859b-f43a-479f-baf3-89c1b7de86d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\"" May 13 23:48:01.895032 kubelet[2597]: E0513 23:48:01.894987 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:01.906501 containerd[1489]: time="2025-05-13T23:48:01.906463615Z" level=info msg="Container 033ecf450b6a7eda6cd75d4adf36df18541629cde7e15587e86dbe1743d81cd5: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:01.916466 containerd[1489]: time="2025-05-13T23:48:01.916424255Z" level=info msg="CreateContainer within sandbox \"f23c264ec94010ff0062ffb83c47150bb741c24b046b492398c31226f0826100\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"033ecf450b6a7eda6cd75d4adf36df18541629cde7e15587e86dbe1743d81cd5\"" May 13 23:48:01.916983 containerd[1489]: time="2025-05-13T23:48:01.916955652Z" level=info msg="StartContainer for \"033ecf450b6a7eda6cd75d4adf36df18541629cde7e15587e86dbe1743d81cd5\"" May 13 23:48:01.918271 containerd[1489]: time="2025-05-13T23:48:01.918247506Z" level=info msg="connecting to shim 033ecf450b6a7eda6cd75d4adf36df18541629cde7e15587e86dbe1743d81cd5" address="unix:///run/containerd/s/29d0f26566e841b09f0532bd9cdfb39bb7765ddb20830f285f88f4f1f221fca9" protocol=ttrpc version=3 May 13 23:48:01.936559 systemd[1]: Started cri-containerd-033ecf450b6a7eda6cd75d4adf36df18541629cde7e15587e86dbe1743d81cd5.scope - libcontainer container 033ecf450b6a7eda6cd75d4adf36df18541629cde7e15587e86dbe1743d81cd5. 
May 13 23:48:01.983185 containerd[1489]: time="2025-05-13T23:48:01.982870897Z" level=info msg="StartContainer for \"033ecf450b6a7eda6cd75d4adf36df18541629cde7e15587e86dbe1743d81cd5\" returns successfully" May 13 23:48:02.411441 kubelet[2597]: E0513 23:48:02.410848 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:02.424578 kubelet[2597]: I0513 23:48:02.424506 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f9np4" podStartSLOduration=2.424484685 podStartE2EDuration="2.424484685s" podCreationTimestamp="2025-05-13 23:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:02.424427445 +0000 UTC m=+7.121650457" watchObservedRunningTime="2025-05-13 23:48:02.424484685 +0000 UTC m=+7.121707686" May 13 23:48:04.485294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2271983817.mount: Deactivated successfully. May 13 23:48:05.785833 kubelet[2597]: E0513 23:48:05.785772 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:06.420595 kubelet[2597]: E0513 23:48:06.420553 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:06.568965 containerd[1489]: time="2025-05-13T23:48:06.568887116Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:06.569918 containerd[1489]: time="2025-05-13T23:48:06.569855647Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 13 23:48:06.571180 containerd[1489]: time="2025-05-13T23:48:06.571143056Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:06.572448 containerd[1489]: time="2025-05-13T23:48:06.572420397Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.887775789s" May 13 23:48:06.572498 containerd[1489]: time="2025-05-13T23:48:06.572450905Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 23:48:06.573642 containerd[1489]: time="2025-05-13T23:48:06.573512894Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 23:48:06.574940 containerd[1489]: time="2025-05-13T23:48:06.574899033Z" level=info msg="CreateContainer within sandbox 
\"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 23:48:06.588035 containerd[1489]: time="2025-05-13T23:48:06.587969744Z" level=info msg="Container 990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:06.593859 containerd[1489]: time="2025-05-13T23:48:06.593808770Z" level=info msg="CreateContainer within sandbox \"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\"" May 13 23:48:06.594519 containerd[1489]: time="2025-05-13T23:48:06.594488128Z" level=info msg="StartContainer for \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\"" May 13 23:48:06.595630 containerd[1489]: time="2025-05-13T23:48:06.595585726Z" level=info msg="connecting to shim 990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853" address="unix:///run/containerd/s/05ac07d1facf1fd46eb8e8d6d163104701542adee1ad595db4119c7e65d525d0" protocol=ttrpc version=3 May 13 23:48:06.622725 systemd[1]: Started cri-containerd-990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853.scope - libcontainer container 990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853. May 13 23:48:06.657851 containerd[1489]: time="2025-05-13T23:48:06.657788285Z" level=info msg="StartContainer for \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" returns successfully" May 13 23:48:07.423910 kubelet[2597]: E0513 23:48:07.423856 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:07.424567 kubelet[2597]: E0513 23:48:07.424003 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:07.517611 kubelet[2597]: E0513 23:48:07.517573 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:07.526758 kubelet[2597]: I0513 23:48:07.526677 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tv6c5" podStartSLOduration=1.637594939 podStartE2EDuration="6.526659816s" podCreationTimestamp="2025-05-13 23:48:01 +0000 UTC" firstStartedPulling="2025-05-13 23:48:01.684200925 +0000 UTC m=+6.381423926" lastFinishedPulling="2025-05-13 23:48:06.573265802 +0000 UTC m=+11.270488803" observedRunningTime="2025-05-13 23:48:07.433419335 +0000 UTC m=+12.130642336" watchObservedRunningTime="2025-05-13 23:48:07.526659816 +0000 UTC m=+12.223882817" May 13 23:48:08.402513 kubelet[2597]: E0513 23:48:08.402448 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:08.431029 kubelet[2597]: E0513 23:48:08.430769 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:14.823727 update_engine[1471]: I20250513 23:48:14.823513 1471 update_attempter.cc:509] Updating boot flags... 
May 13 23:48:14.866493 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3036) May 13 23:48:14.915465 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3040) May 13 23:48:20.937443 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:54840.service - OpenSSH per-connection server daemon (10.0.0.1:54840). May 13 23:48:20.988114 sshd[3048]: Accepted publickey for core from 10.0.0.1 port 54840 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:48:20.990833 sshd-session[3048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:21.003556 systemd-logind[1469]: New session 8 of user core. May 13 23:48:21.009784 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:48:21.214061 sshd[3050]: Connection closed by 10.0.0.1 port 54840 May 13 23:48:21.215749 sshd-session[3048]: pam_unix(sshd:session): session closed for user core May 13 23:48:21.221046 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:54840.service: Deactivated successfully. May 13 23:48:21.224063 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:48:21.225578 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit. May 13 23:48:21.230495 systemd-logind[1469]: Removed session 8. May 13 23:48:22.545653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692140680.mount: Deactivated successfully. May 13 23:48:26.103550 containerd[1489]: time="2025-05-13T23:48:26.103446838Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:26.105394 containerd[1489]: time="2025-05-13T23:48:26.105307366Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 23:48:26.108142 containerd[1489]: time="2025-05-13T23:48:26.108060779Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:26.110636 containerd[1489]: time="2025-05-13T23:48:26.110559982Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 19.537003055s" May 13 23:48:26.110636 containerd[1489]: time="2025-05-13T23:48:26.110622650Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 23:48:26.116327 containerd[1489]: time="2025-05-13T23:48:26.116273167Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:48:26.145625 containerd[1489]: time="2025-05-13T23:48:26.138309753Z" level=info msg="Container e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:26.176512 containerd[1489]: 
time="2025-05-13T23:48:26.171724607Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\"" May 13 23:48:26.176512 containerd[1489]: time="2025-05-13T23:48:26.176061235Z" level=info msg="StartContainer for \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\"" May 13 23:48:26.177715 containerd[1489]: time="2025-05-13T23:48:26.177609605Z" level=info msg="connecting to shim e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583" address="unix:///run/containerd/s/66ab3125d181ea48434843340b58728c329a48e4b71c22aab8351be03f5ba531" protocol=ttrpc version=3 May 13 23:48:26.252782 systemd[1]: Started cri-containerd-e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583.scope - libcontainer container e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583. May 13 23:48:26.259545 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:54850.service - OpenSSH per-connection server daemon (10.0.0.1:54850). May 13 23:48:26.370644 sshd[3105]: Accepted publickey for core from 10.0.0.1 port 54850 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:48:26.369039 sshd-session[3105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:26.388092 systemd-logind[1469]: New session 9 of user core. May 13 23:48:26.390924 containerd[1489]: time="2025-05-13T23:48:26.389074448Z" level=info msg="StartContainer for \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\" returns successfully" May 13 23:48:26.398661 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:48:26.408237 systemd[1]: cri-containerd-e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583.scope: Deactivated successfully. May 13 23:48:26.411130 containerd[1489]: time="2025-05-13T23:48:26.410305365Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\" id:\"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\" pid:3107 exited_at:{seconds:1747180106 nanos:409176837}" May 13 23:48:26.411130 containerd[1489]: time="2025-05-13T23:48:26.410533005Z" level=info msg="received exit event container_id:\"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\" id:\"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\" pid:3107 exited_at:{seconds:1747180106 nanos:409176837}" May 13 23:48:26.408636 systemd[1]: cri-containerd-e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583.scope: Consumed 48ms CPU time, 6.6M memory peak, 4K read from disk, 3.2M written to disk. May 13 23:48:26.482641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583-rootfs.mount: Deactivated successfully. May 13 23:48:26.584903 sshd[3126]: Connection closed by 10.0.0.1 port 54850 May 13 23:48:26.586649 sshd-session[3105]: pam_unix(sshd:session): session closed for user core May 13 23:48:26.592727 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:54850.service: Deactivated successfully. May 13 23:48:26.595836 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:48:26.597291 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit. May 13 23:48:26.598963 systemd-logind[1469]: Removed session 9. 
May 13 23:48:27.314146 kubelet[2597]: E0513 23:48:27.314082 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:27.316743 containerd[1489]: time="2025-05-13T23:48:27.316679351Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 23:48:27.330767 containerd[1489]: time="2025-05-13T23:48:27.330720061Z" level=info msg="Container a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:27.341060 containerd[1489]: time="2025-05-13T23:48:27.340970535Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\"" May 13 23:48:27.343071 containerd[1489]: time="2025-05-13T23:48:27.341793096Z" level=info msg="StartContainer for \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\"" May 13 23:48:27.343071 containerd[1489]: time="2025-05-13T23:48:27.342817577Z" level=info msg="connecting to shim a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd" address="unix:///run/containerd/s/66ab3125d181ea48434843340b58728c329a48e4b71c22aab8351be03f5ba531" protocol=ttrpc version=3 May 13 23:48:27.368795 systemd[1]: Started cri-containerd-a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd.scope - libcontainer container a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd. May 13 23:48:27.415679 containerd[1489]: time="2025-05-13T23:48:27.415617832Z" level=info msg="StartContainer for \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\" returns successfully" May 13 23:48:27.427875 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:48:27.428670 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:48:27.429089 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 23:48:27.431504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:48:27.434063 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:48:27.434901 containerd[1489]: time="2025-05-13T23:48:27.434840411Z" level=info msg="received exit event container_id:\"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\" id:\"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\" pid:3163 exited_at:{seconds:1747180107 nanos:434576323}" May 13 23:48:27.435178 containerd[1489]: time="2025-05-13T23:48:27.435150506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\" id:\"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\" pid:3163 exited_at:{seconds:1747180107 nanos:434576323}" May 13 23:48:27.436793 systemd[1]: cri-containerd-a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd.scope: Deactivated successfully. May 13 23:48:27.461972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
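
The apply-sysctl-overwrites init container writes kernel settings, which is likely why systemd-sysctl.service is stopped and re-run around the same moment above (the exact trigger is an inference, not stated in this log). Setting a sysctl is just a write into /proc/sys; the specific key below is an illustrative assumption (Cilium commonly relaxes rp_filter), not something this log records:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Illustrative equivalent of: sysctl -w net.ipv4.conf.all.rp_filter=0 (requires root).
        if err := os.WriteFile("/proc/sys/net/ipv4/conf/all/rp_filter", []byte("0\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
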
May 13 23:48:28.318120 kubelet[2597]: E0513 23:48:28.318073 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:28.323111 containerd[1489]: time="2025-05-13T23:48:28.323052094Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 23:48:28.332397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd-rootfs.mount: Deactivated successfully. May 13 23:48:28.345643 containerd[1489]: time="2025-05-13T23:48:28.345580653Z" level=info msg="Container a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:28.356677 containerd[1489]: time="2025-05-13T23:48:28.356626040Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\"" May 13 23:48:28.357251 containerd[1489]: time="2025-05-13T23:48:28.357212505Z" level=info msg="StartContainer for \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\"" May 13 23:48:28.358665 containerd[1489]: time="2025-05-13T23:48:28.358639054Z" level=info msg="connecting to shim a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce" address="unix:///run/containerd/s/66ab3125d181ea48434843340b58728c329a48e4b71c22aab8351be03f5ba531" protocol=ttrpc version=3 May 13 23:48:28.380736 systemd[1]: Started cri-containerd-a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce.scope - libcontainer container a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce. May 13 23:48:28.431376 systemd[1]: cri-containerd-a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce.scope: Deactivated successfully. May 13 23:48:28.432643 containerd[1489]: time="2025-05-13T23:48:28.432603052Z" level=info msg="received exit event container_id:\"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\" id:\"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\" pid:3209 exited_at:{seconds:1747180108 nanos:432302365}" May 13 23:48:28.432884 containerd[1489]: time="2025-05-13T23:48:28.432792538Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\" id:\"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\" pid:3209 exited_at:{seconds:1747180108 nanos:432302365}" May 13 23:48:28.433891 containerd[1489]: time="2025-05-13T23:48:28.433850052Z" level=info msg="StartContainer for \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\" returns successfully" May 13 23:48:28.461002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce-rootfs.mount: Deactivated successfully. 
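
The mount-bpf-fs init container whose lifecycle is logged above ensures the BPF filesystem is mounted at /sys/fs/bpf so Cilium's pinned maps survive agent restarts. The operation amounts to `mount -t bpf bpffs /sys/fs/bpf`; a sketch with golang.org/x/sys/unix, tolerating the already-mounted case:

    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Equivalent of: mount -t bpf bpffs /sys/fs/bpf (requires root).
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
            log.Fatal(err) // EBUSY means bpffs was already mounted there
        }
    }
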
May 13 23:48:29.323609 kubelet[2597]: E0513 23:48:29.323569 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:29.325179 containerd[1489]: time="2025-05-13T23:48:29.325127783Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 23:48:29.338009 containerd[1489]: time="2025-05-13T23:48:29.337950733Z" level=info msg="Container d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:29.348914 containerd[1489]: time="2025-05-13T23:48:29.348863164Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\"" May 13 23:48:29.349352 containerd[1489]: time="2025-05-13T23:48:29.349320445Z" level=info msg="StartContainer for \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\"" May 13 23:48:29.350255 containerd[1489]: time="2025-05-13T23:48:29.350222465Z" level=info msg="connecting to shim d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf" address="unix:///run/containerd/s/66ab3125d181ea48434843340b58728c329a48e4b71c22aab8351be03f5ba531" protocol=ttrpc version=3 May 13 23:48:29.372652 systemd[1]: Started cri-containerd-d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf.scope - libcontainer container d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf. May 13 23:48:29.403184 containerd[1489]: time="2025-05-13T23:48:29.403124631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\" id:\"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\" pid:3248 exited_at:{seconds:1747180109 nanos:402707485}" May 13 23:48:29.403536 systemd[1]: cri-containerd-d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf.scope: Deactivated successfully. May 13 23:48:29.408936 containerd[1489]: time="2025-05-13T23:48:29.408867412Z" level=info msg="received exit event container_id:\"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\" id:\"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\" pid:3248 exited_at:{seconds:1747180109 nanos:402707485}" May 13 23:48:29.418794 containerd[1489]: time="2025-05-13T23:48:29.418742267Z" level=info msg="StartContainer for \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\" returns successfully" May 13 23:48:29.436117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf-rootfs.mount: Deactivated successfully. 
May 13 23:48:30.329862 kubelet[2597]: E0513 23:48:30.329825 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:30.331638 containerd[1489]: time="2025-05-13T23:48:30.331586392Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 23:48:30.348193 containerd[1489]: time="2025-05-13T23:48:30.348138731Z" level=info msg="Container 5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:30.351991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3074569707.mount: Deactivated successfully. May 13 23:48:30.359529 containerd[1489]: time="2025-05-13T23:48:30.359485555Z" level=info msg="CreateContainer within sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\"" May 13 23:48:30.359972 containerd[1489]: time="2025-05-13T23:48:30.359931285Z" level=info msg="StartContainer for \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\"" May 13 23:48:30.360988 containerd[1489]: time="2025-05-13T23:48:30.360960723Z" level=info msg="connecting to shim 5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc" address="unix:///run/containerd/s/66ab3125d181ea48434843340b58728c329a48e4b71c22aab8351be03f5ba531" protocol=ttrpc version=3 May 13 23:48:30.384663 systemd[1]: Started cri-containerd-5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc.scope - libcontainer container 5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc. May 13 23:48:30.427776 containerd[1489]: time="2025-05-13T23:48:30.427727420Z" level=info msg="StartContainer for \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" returns successfully" May 13 23:48:30.500439 containerd[1489]: time="2025-05-13T23:48:30.499357328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" id:\"d80977043982dcdc4f5ad2397be5bbe21480ea19d0eba45d6a79a19cb1c9ebf8\" pid:3318 exited_at:{seconds:1747180110 nanos:499065979}" May 13 23:48:30.509894 kubelet[2597]: I0513 23:48:30.509353 2597 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 23:48:30.537719 kubelet[2597]: I0513 23:48:30.537658 2597 status_manager.go:890] "Failed to get status for pod" podUID="dde9684a-0438-4eea-8d20-fded41911ac5" pod="kube-system/coredns-668d6bf9bc-dzsrg" err="pods \"coredns-668d6bf9bc-dzsrg\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 13 23:48:30.543141 systemd[1]: Created slice kubepods-burstable-poddde9684a_0438_4eea_8d20_fded41911ac5.slice - libcontainer container kubepods-burstable-poddde9684a_0438_4eea_8d20_fded41911ac5.slice. May 13 23:48:30.550519 systemd[1]: Created slice kubepods-burstable-podfc534248_d016_4605_81ea_74d631295c81.slice - libcontainer container kubepods-burstable-podfc534248_d016_4605_81ea_74d631295c81.slice. 
May 13 23:48:30.650127 kubelet[2597]: I0513 23:48:30.650076 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dde9684a-0438-4eea-8d20-fded41911ac5-config-volume\") pod \"coredns-668d6bf9bc-dzsrg\" (UID: \"dde9684a-0438-4eea-8d20-fded41911ac5\") " pod="kube-system/coredns-668d6bf9bc-dzsrg" May 13 23:48:30.650127 kubelet[2597]: I0513 23:48:30.650124 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hkh\" (UniqueName: \"kubernetes.io/projected/dde9684a-0438-4eea-8d20-fded41911ac5-kube-api-access-77hkh\") pod \"coredns-668d6bf9bc-dzsrg\" (UID: \"dde9684a-0438-4eea-8d20-fded41911ac5\") " pod="kube-system/coredns-668d6bf9bc-dzsrg" May 13 23:48:30.650127 kubelet[2597]: I0513 23:48:30.650142 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc534248-d016-4605-81ea-74d631295c81-config-volume\") pod \"coredns-668d6bf9bc-mgvdq\" (UID: \"fc534248-d016-4605-81ea-74d631295c81\") " pod="kube-system/coredns-668d6bf9bc-mgvdq" May 13 23:48:30.650358 kubelet[2597]: I0513 23:48:30.650163 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk46t\" (UniqueName: \"kubernetes.io/projected/fc534248-d016-4605-81ea-74d631295c81-kube-api-access-qk46t\") pod \"coredns-668d6bf9bc-mgvdq\" (UID: \"fc534248-d016-4605-81ea-74d631295c81\") " pod="kube-system/coredns-668d6bf9bc-mgvdq" May 13 23:48:30.847280 kubelet[2597]: E0513 23:48:30.846491 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:30.848326 containerd[1489]: time="2025-05-13T23:48:30.848027199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzsrg,Uid:dde9684a-0438-4eea-8d20-fded41911ac5,Namespace:kube-system,Attempt:0,}" May 13 23:48:30.853118 kubelet[2597]: E0513 23:48:30.853077 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:30.853652 containerd[1489]: time="2025-05-13T23:48:30.853609124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mgvdq,Uid:fc534248-d016-4605-81ea-74d631295c81,Namespace:kube-system,Attempt:0,}" May 13 23:48:31.335787 kubelet[2597]: E0513 23:48:31.335755 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:31.349915 kubelet[2597]: I0513 23:48:31.349843 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lgbkh" podStartSLOduration=7.13102348 podStartE2EDuration="31.349823533s" podCreationTimestamp="2025-05-13 23:48:00 +0000 UTC" firstStartedPulling="2025-05-13 23:48:01.895397229 +0000 UTC m=+6.592620230" lastFinishedPulling="2025-05-13 23:48:26.114197271 +0000 UTC m=+30.811420283" observedRunningTime="2025-05-13 23:48:31.349448817 +0000 UTC m=+36.046671828" watchObservedRunningTime="2025-05-13 23:48:31.349823533 +0000 UTC m=+36.047046534" May 13 23:48:31.598044 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:55138.service - OpenSSH per-connection server daemon (10.0.0.1:55138). 
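
The cilium-lgbkh startup entry above shows how the latency tracker separates image-pull time from the startup SLO: podStartSLOduration (~7.13s) is podStartE2EDuration (~31.35s) minus the window between firstStartedPulling and lastFinishedPulling. Reproducing the arithmetic from the logged values:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05 -0700 MST"

    func main() {
        e2e := 31349823533 * time.Nanosecond // podStartE2EDuration from the log
        pullStart, _ := time.Parse(layout, "2025-05-13 23:48:01.895397229 +0000 UTC")
        pullEnd, _ := time.Parse(layout, "2025-05-13 23:48:26.114197271 +0000 UTC")
        fmt.Println(e2e - pullEnd.Sub(pullStart)) // 7.131023491s, matching the logged 7.13102348s to ~10ns
    }
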
May 13 23:48:31.659170 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 55138 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:48:31.661468 sshd-session[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:31.666876 systemd-logind[1469]: New session 10 of user core. May 13 23:48:31.677698 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:48:31.829809 sshd[3415]: Connection closed by 10.0.0.1 port 55138 May 13 23:48:31.830179 sshd-session[3413]: pam_unix(sshd:session): session closed for user core May 13 23:48:31.834306 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:55138.service: Deactivated successfully. May 13 23:48:31.836948 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:48:31.839462 systemd-logind[1469]: Session 10 logged out. Waiting for processes to exit. May 13 23:48:31.840888 systemd-logind[1469]: Removed session 10. May 13 23:48:32.338513 kubelet[2597]: E0513 23:48:32.338462 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:32.662851 systemd-networkd[1420]: cilium_host: Link UP May 13 23:48:32.663109 systemd-networkd[1420]: cilium_net: Link UP May 13 23:48:32.663343 systemd-networkd[1420]: cilium_net: Gained carrier May 13 23:48:32.663581 systemd-networkd[1420]: cilium_host: Gained carrier May 13 23:48:32.785354 systemd-networkd[1420]: cilium_vxlan: Link UP May 13 23:48:32.785364 systemd-networkd[1420]: cilium_vxlan: Gained carrier May 13 23:48:33.018459 kernel: NET: Registered PF_ALG protocol family May 13 23:48:33.339966 kubelet[2597]: E0513 23:48:33.339914 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:33.573135 systemd-networkd[1420]: cilium_net: Gained IPv6LL May 13 23:48:33.636579 systemd-networkd[1420]: cilium_host: Gained IPv6LL May 13 23:48:33.753547 systemd-networkd[1420]: lxc_health: Link UP May 13 23:48:33.754014 systemd-networkd[1420]: lxc_health: Gained carrier May 13 23:48:33.928771 kernel: eth0: renamed from tmp0e32e May 13 23:48:33.952257 kernel: eth0: renamed from tmp5cd7b May 13 23:48:33.958128 systemd-networkd[1420]: lxc205f3178df9e: Link UP May 13 23:48:33.959617 systemd-networkd[1420]: lxc3d2ccd56379c: Link UP May 13 23:48:33.959866 systemd-networkd[1420]: lxc205f3178df9e: Gained carrier May 13 23:48:33.963308 systemd-networkd[1420]: lxc3d2ccd56379c: Gained carrier May 13 23:48:34.277252 systemd-networkd[1420]: cilium_vxlan: Gained IPv6LL May 13 23:48:34.341697 kubelet[2597]: E0513 23:48:34.341636 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:35.343555 kubelet[2597]: E0513 23:48:35.343500 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:35.620953 systemd-networkd[1420]: lxc_health: Gained IPv6LL May 13 23:48:35.940694 systemd-networkd[1420]: lxc205f3178df9e: Gained IPv6LL May 13 23:48:35.941127 systemd-networkd[1420]: lxc3d2ccd56379c: Gained IPv6LL May 13 23:48:36.350274 kubelet[2597]: E0513 23:48:36.350144 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:36.848822 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:57208.service - OpenSSH per-connection server daemon (10.0.0.1:57208). May 13 23:48:36.909684 sshd[3810]: Accepted publickey for core from 10.0.0.1 port 57208 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:48:36.912334 sshd-session[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:36.917939 systemd-logind[1469]: New session 11 of user core. May 13 23:48:36.925747 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:48:37.068695 sshd[3812]: Connection closed by 10.0.0.1 port 57208 May 13 23:48:37.069023 sshd-session[3810]: pam_unix(sshd:session): session closed for user core May 13 23:48:37.073011 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:57208.service: Deactivated successfully. May 13 23:48:37.075345 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:48:37.076206 systemd-logind[1469]: Session 11 logged out. Waiting for processes to exit. May 13 23:48:37.077035 systemd-logind[1469]: Removed session 11. May 13 23:48:38.312603 containerd[1489]: time="2025-05-13T23:48:38.312545679Z" level=info msg="connecting to shim 0e32e291fe24815f874c3d4f1bf7e4d40e605b9c29ced58ceed05dd68fc11fae" address="unix:///run/containerd/s/d38393d2d7cf1d834172be09f6eefd59df111e005fc8f07183e4f1c4ab32c5ba" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:38.332844 containerd[1489]: time="2025-05-13T23:48:38.332766268Z" level=info msg="connecting to shim 5cd7bfcd5b2ae5d7123b606894c868a508dd733ccd6b18412bc50f617f2f194b" address="unix:///run/containerd/s/8d679922a3e1d2bbe6b6e48d8f604971414a17ddf96313b0434bb3ec2c18a732" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:38.342646 systemd[1]: Started cri-containerd-0e32e291fe24815f874c3d4f1bf7e4d40e605b9c29ced58ceed05dd68fc11fae.scope - libcontainer container 0e32e291fe24815f874c3d4f1bf7e4d40e605b9c29ced58ceed05dd68fc11fae. May 13 23:48:38.361583 systemd[1]: Started cri-containerd-5cd7bfcd5b2ae5d7123b606894c868a508dd733ccd6b18412bc50f617f2f194b.scope - libcontainer container 5cd7bfcd5b2ae5d7123b606894c868a508dd733ccd6b18412bc50f617f2f194b. 
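
The cilium_host/cilium_net pair that systemd-networkd reports coming up is, in Cilium's design, a veth pair created over netlink; the pairing itself is an inference from Cilium's architecture, not stated in this log. A sketch with the vishvananda/netlink package (Linux only, requires CAP_NET_ADMIN):

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Create the pair: cilium_host with peer cilium_net.
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: "cilium_host"},
            PeerName:  "cilium_net",
        }
        if err := netlink.LinkAdd(veth); err != nil {
            log.Fatal(err)
        }
        // Bring both ends up; the "Gained carrier" entries above correspond to this step.
        for _, name := range []string{"cilium_host", "cilium_net"} {
            link, err := netlink.LinkByName(name)
            if err != nil {
                log.Fatal(err)
            }
            if err := netlink.LinkSetUp(link); err != nil {
                log.Fatal(err)
            }
        }
    }
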
May 13 23:48:38.367587 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:48:38.376946 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:48:38.421005 containerd[1489]: time="2025-05-13T23:48:38.420950270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzsrg,Uid:dde9684a-0438-4eea-8d20-fded41911ac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e32e291fe24815f874c3d4f1bf7e4d40e605b9c29ced58ceed05dd68fc11fae\""
May 13 23:48:38.421779 kubelet[2597]: E0513 23:48:38.421742 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:38.423616 containerd[1489]: time="2025-05-13T23:48:38.423473646Z" level=info msg="CreateContainer within sandbox \"0e32e291fe24815f874c3d4f1bf7e4d40e605b9c29ced58ceed05dd68fc11fae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:48:38.431975 containerd[1489]: time="2025-05-13T23:48:38.431001404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mgvdq,Uid:fc534248-d016-4605-81ea-74d631295c81,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cd7bfcd5b2ae5d7123b606894c868a508dd733ccd6b18412bc50f617f2f194b\""
May 13 23:48:38.433293 kubelet[2597]: E0513 23:48:38.433252 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:38.441873 containerd[1489]: time="2025-05-13T23:48:38.440477166Z" level=info msg="CreateContainer within sandbox \"5cd7bfcd5b2ae5d7123b606894c868a508dd733ccd6b18412bc50f617f2f194b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:48:38.503617 containerd[1489]: time="2025-05-13T23:48:38.503553327Z" level=info msg="Container f88a68081dfbd2d0ff36d22e4276c0f9bb05dd66ff903382091f542a36366e4d: CDI devices from CRI Config.CDIDevices: []"
May 13 23:48:38.509781 containerd[1489]: time="2025-05-13T23:48:38.509708964Z" level=info msg="Container e9d54120e521a395a319fc5153468c9b781551cc93da6781e10627674823f4f5: CDI devices from CRI Config.CDIDevices: []"
May 13 23:48:38.551989 containerd[1489]: time="2025-05-13T23:48:38.550820474Z" level=info msg="CreateContainer within sandbox \"0e32e291fe24815f874c3d4f1bf7e4d40e605b9c29ced58ceed05dd68fc11fae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f88a68081dfbd2d0ff36d22e4276c0f9bb05dd66ff903382091f542a36366e4d\""
May 13 23:48:38.552625 containerd[1489]: time="2025-05-13T23:48:38.552593179Z" level=info msg="StartContainer for \"f88a68081dfbd2d0ff36d22e4276c0f9bb05dd66ff903382091f542a36366e4d\""
May 13 23:48:38.556339 containerd[1489]: time="2025-05-13T23:48:38.555054668Z" level=info msg="connecting to shim f88a68081dfbd2d0ff36d22e4276c0f9bb05dd66ff903382091f542a36366e4d" address="unix:///run/containerd/s/d38393d2d7cf1d834172be09f6eefd59df111e005fc8f07183e4f1c4ab32c5ba" protocol=ttrpc version=3
May 13 23:48:38.571135 containerd[1489]: time="2025-05-13T23:48:38.570894989Z" level=info msg="CreateContainer within sandbox \"5cd7bfcd5b2ae5d7123b606894c868a508dd733ccd6b18412bc50f617f2f194b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9d54120e521a395a319fc5153468c9b781551cc93da6781e10627674823f4f5\""
May 13 23:48:38.574484 containerd[1489]: time="2025-05-13T23:48:38.572707098Z" level=info msg="StartContainer for \"e9d54120e521a395a319fc5153468c9b781551cc93da6781e10627674823f4f5\""
May 13 23:48:38.574890 containerd[1489]: time="2025-05-13T23:48:38.574827766Z" level=info msg="connecting to shim e9d54120e521a395a319fc5153468c9b781551cc93da6781e10627674823f4f5" address="unix:///run/containerd/s/8d679922a3e1d2bbe6b6e48d8f604971414a17ddf96313b0434bb3ec2c18a732" protocol=ttrpc version=3
May 13 23:48:38.579625 systemd[1]: Started cri-containerd-f88a68081dfbd2d0ff36d22e4276c0f9bb05dd66ff903382091f542a36366e4d.scope - libcontainer container f88a68081dfbd2d0ff36d22e4276c0f9bb05dd66ff903382091f542a36366e4d.
May 13 23:48:38.620588 systemd[1]: Started cri-containerd-e9d54120e521a395a319fc5153468c9b781551cc93da6781e10627674823f4f5.scope - libcontainer container e9d54120e521a395a319fc5153468c9b781551cc93da6781e10627674823f4f5.
May 13 23:48:38.634075 containerd[1489]: time="2025-05-13T23:48:38.634037335Z" level=info msg="StartContainer for \"f88a68081dfbd2d0ff36d22e4276c0f9bb05dd66ff903382091f542a36366e4d\" returns successfully"
May 13 23:48:38.677580 containerd[1489]: time="2025-05-13T23:48:38.677053067Z" level=info msg="StartContainer for \"e9d54120e521a395a319fc5153468c9b781551cc93da6781e10627674823f4f5\" returns successfully"
May 13 23:48:39.308019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527755392.mount: Deactivated successfully.
May 13 23:48:39.371388 kubelet[2597]: E0513 23:48:39.371342 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:39.373601 kubelet[2597]: E0513 23:48:39.373557 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:39.464660 kubelet[2597]: I0513 23:48:39.464585 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dzsrg" podStartSLOduration=38.464568996 podStartE2EDuration="38.464568996s" podCreationTimestamp="2025-05-13 23:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:39.464135711 +0000 UTC m=+44.161358712" watchObservedRunningTime="2025-05-13 23:48:39.464568996 +0000 UTC m=+44.161791987"
May 13 23:48:39.465329 kubelet[2597]: I0513 23:48:39.464670 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mgvdq" podStartSLOduration=38.464665467 podStartE2EDuration="38.464665467s" podCreationTimestamp="2025-05-13 23:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:39.387214652 +0000 UTC m=+44.084437663" watchObservedRunningTime="2025-05-13 23:48:39.464665467 +0000 UTC m=+44.161888458"
May 13 23:48:40.375610 kubelet[2597]: E0513 23:48:40.375570 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:40.375914 kubelet[2597]: E0513 23:48:40.375821 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:41.377345 kubelet[2597]: E0513 23:48:41.377273 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:41.380446 kubelet[2597]: E0513 23:48:41.379391 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:42.109966 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:57214.service - OpenSSH per-connection server daemon (10.0.0.1:57214).
May 13 23:48:42.199649 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 57214 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:48:42.202386 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:42.214639 systemd-logind[1469]: New session 12 of user core.
May 13 23:48:42.220044 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:48:42.355598 sshd[4003]: Connection closed by 10.0.0.1 port 57214
May 13 23:48:42.356135 sshd-session[4001]: pam_unix(sshd:session): session closed for user core
May 13 23:48:42.369060 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:57214.service: Deactivated successfully.
May 13 23:48:42.371914 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:48:42.374082 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit.
May 13 23:48:42.376309 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:57228.service - OpenSSH per-connection server daemon (10.0.0.1:57228).
May 13 23:48:42.377671 systemd-logind[1469]: Removed session 12.
May 13 23:48:42.431010 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 57228 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:48:42.432886 sshd-session[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:42.439228 systemd-logind[1469]: New session 13 of user core.
May 13 23:48:42.450769 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:48:42.618305 sshd[4020]: Connection closed by 10.0.0.1 port 57228
May 13 23:48:42.618709 sshd-session[4017]: pam_unix(sshd:session): session closed for user core
May 13 23:48:42.630529 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:57228.service: Deactivated successfully.
May 13 23:48:42.632777 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:48:42.637156 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:57238.service - OpenSSH per-connection server daemon (10.0.0.1:57238).
May 13 23:48:42.637390 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit.
May 13 23:48:42.639397 systemd-logind[1469]: Removed session 13.
May 13 23:48:42.688474 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 57238 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:48:42.690787 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:42.696485 systemd-logind[1469]: New session 14 of user core.
May 13 23:48:42.708712 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:48:42.822733 sshd[4034]: Connection closed by 10.0.0.1 port 57238
May 13 23:48:42.823053 sshd-session[4031]: pam_unix(sshd:session): session closed for user core
May 13 23:48:42.829028 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:57238.service: Deactivated successfully.
May 13 23:48:42.831427 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:48:42.832490 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit.
May 13 23:48:42.833602 systemd-logind[1469]: Removed session 14.
May 13 23:48:47.839247 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:33886.service - OpenSSH per-connection server daemon (10.0.0.1:33886).
May 13 23:48:47.898268 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:48:47.900077 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:47.905261 systemd-logind[1469]: New session 15 of user core.
May 13 23:48:47.910630 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:48:48.030490 sshd[4054]: Connection closed by 10.0.0.1 port 33886
May 13 23:48:48.030863 sshd-session[4052]: pam_unix(sshd:session): session closed for user core
May 13 23:48:48.034543 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:33886.service: Deactivated successfully.
May 13 23:48:48.037235 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:48:48.038985 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit.
May 13 23:48:48.040196 systemd-logind[1469]: Removed session 15.
May 13 23:48:53.063078 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:33902.service - OpenSSH per-connection server daemon (10.0.0.1:33902).
May 13 23:48:53.132222 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 33902 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:48:53.134339 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:53.144531 systemd-logind[1469]: New session 16 of user core.
May 13 23:48:53.155752 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:48:53.339563 sshd[4069]: Connection closed by 10.0.0.1 port 33902
May 13 23:48:53.341284 sshd-session[4067]: pam_unix(sshd:session): session closed for user core
May 13 23:48:53.345380 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:33902.service: Deactivated successfully.
May 13 23:48:53.348052 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:48:53.351922 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit.
May 13 23:48:53.353798 systemd-logind[1469]: Removed session 16.
May 13 23:48:58.367033 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:33196.service - OpenSSH per-connection server daemon (10.0.0.1:33196).
May 13 23:48:58.463176 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 33196 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:48:58.464003 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:58.475301 systemd-logind[1469]: New session 17 of user core.
May 13 23:48:58.483771 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:48:58.706606 sshd[4087]: Connection closed by 10.0.0.1 port 33196
May 13 23:48:58.707352 sshd-session[4085]: pam_unix(sshd:session): session closed for user core
May 13 23:48:58.754551 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:33196.service: Deactivated successfully.
May 13 23:48:58.760496 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:48:58.773521 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit.
May 13 23:48:58.774732 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:33198.service - OpenSSH per-connection server daemon (10.0.0.1:33198).
May 13 23:48:58.850484 systemd-logind[1469]: Removed session 17.
May 13 23:48:58.928509 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 33198 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:48:58.932261 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:58.945494 systemd-logind[1469]: New session 18 of user core.
May 13 23:48:58.956719 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:48:59.448467 sshd[4102]: Connection closed by 10.0.0.1 port 33198
May 13 23:48:59.451671 sshd-session[4099]: pam_unix(sshd:session): session closed for user core
May 13 23:48:59.489696 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:33198.service: Deactivated successfully.
May 13 23:48:59.492451 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:48:59.494247 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit.
May 13 23:48:59.498147 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:33212.service - OpenSSH per-connection server daemon (10.0.0.1:33212).
May 13 23:48:59.499248 systemd-logind[1469]: Removed session 18.
May 13 23:48:59.603304 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 33212 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:48:59.607128 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:59.621546 systemd-logind[1469]: New session 19 of user core.
May 13 23:48:59.638672 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:49:00.876982 sshd[4115]: Connection closed by 10.0.0.1 port 33212
May 13 23:49:00.880486 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
May 13 23:49:00.894692 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:33212.service: Deactivated successfully.
May 13 23:49:00.905525 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:49:00.920923 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit.
May 13 23:49:00.927324 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:33226.service - OpenSSH per-connection server daemon (10.0.0.1:33226).
May 13 23:49:00.933577 systemd-logind[1469]: Removed session 19.
May 13 23:49:01.014665 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 33226 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:01.016522 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:01.031318 systemd-logind[1469]: New session 20 of user core.
May 13 23:49:01.039785 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:49:01.500878 sshd[4138]: Connection closed by 10.0.0.1 port 33226
May 13 23:49:01.501815 sshd-session[4135]: pam_unix(sshd:session): session closed for user core
May 13 23:49:01.518101 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:33226.service: Deactivated successfully.
May 13 23:49:01.521834 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:49:01.523991 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit.
May 13 23:49:01.527949 systemd[1]: Started sshd@20-10.0.0.20:22-10.0.0.1:33232.service - OpenSSH per-connection server daemon (10.0.0.1:33232).
May 13 23:49:01.529233 systemd-logind[1469]: Removed session 20.
May 13 23:49:01.589970 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 33232 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:01.592205 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:01.607543 systemd-logind[1469]: New session 21 of user core.
May 13 23:49:01.616815 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:49:01.786481 sshd[4152]: Connection closed by 10.0.0.1 port 33232
May 13 23:49:01.786463 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
May 13 23:49:01.790691 systemd[1]: sshd@20-10.0.0.20:22-10.0.0.1:33232.service: Deactivated successfully.
May 13 23:49:01.793706 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:49:01.796030 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit.
May 13 23:49:01.799082 systemd-logind[1469]: Removed session 21.
May 13 23:49:06.804501 systemd[1]: Started sshd@21-10.0.0.20:22-10.0.0.1:40944.service - OpenSSH per-connection server daemon (10.0.0.1:40944).
May 13 23:49:06.870150 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 40944 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:06.872456 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:06.879641 systemd-logind[1469]: New session 22 of user core.
May 13 23:49:06.893799 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:49:07.036316 sshd[4169]: Connection closed by 10.0.0.1 port 40944
May 13 23:49:07.036390 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
May 13 23:49:07.042132 systemd[1]: sshd@21-10.0.0.20:22-10.0.0.1:40944.service: Deactivated successfully.
May 13 23:49:07.045202 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:49:07.046182 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit.
May 13 23:49:07.047253 systemd-logind[1469]: Removed session 22.
May 13 23:49:10.387666 kubelet[2597]: E0513 23:49:10.384566 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:12.076769 systemd[1]: Started sshd@22-10.0.0.20:22-10.0.0.1:40956.service - OpenSSH per-connection server daemon (10.0.0.1:40956).
May 13 23:49:12.190813 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 40956 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:12.193559 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:12.202849 systemd-logind[1469]: New session 23 of user core.
May 13 23:49:12.211889 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:49:12.450567 sshd[4186]: Connection closed by 10.0.0.1 port 40956
May 13 23:49:12.451380 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
May 13 23:49:12.455831 systemd[1]: sshd@22-10.0.0.20:22-10.0.0.1:40956.service: Deactivated successfully.
May 13 23:49:12.460226 systemd[1]: session-23.scope: Deactivated successfully.
May 13 23:49:12.465451 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit.
May 13 23:49:12.468298 systemd-logind[1469]: Removed session 23.
May 13 23:49:15.385187 kubelet[2597]: E0513 23:49:15.385133 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:17.480608 systemd[1]: Started sshd@23-10.0.0.20:22-10.0.0.1:43392.service - OpenSSH per-connection server daemon (10.0.0.1:43392).
May 13 23:49:17.568384 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 43392 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:17.570755 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:17.579728 systemd-logind[1469]: New session 24 of user core.
May 13 23:49:17.589794 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 23:49:17.780033 sshd[4202]: Connection closed by 10.0.0.1 port 43392
May 13 23:49:17.780859 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
May 13 23:49:17.784938 systemd[1]: sshd@23-10.0.0.20:22-10.0.0.1:43392.service: Deactivated successfully.
May 13 23:49:17.789560 systemd[1]: session-24.scope: Deactivated successfully.
May 13 23:49:17.795740 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit.
May 13 23:49:17.797832 systemd-logind[1469]: Removed session 24.
May 13 23:49:22.806148 systemd[1]: Started sshd@24-10.0.0.20:22-10.0.0.1:43394.service - OpenSSH per-connection server daemon (10.0.0.1:43394).
May 13 23:49:22.897257 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 43394 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:22.899181 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:22.914734 systemd-logind[1469]: New session 25 of user core.
May 13 23:49:22.926593 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 23:49:23.156917 sshd[4218]: Connection closed by 10.0.0.1 port 43394
May 13 23:49:23.157542 sshd-session[4216]: pam_unix(sshd:session): session closed for user core
May 13 23:49:23.177937 systemd[1]: sshd@24-10.0.0.20:22-10.0.0.1:43394.service: Deactivated successfully.
May 13 23:49:23.214567 systemd[1]: session-25.scope: Deactivated successfully.
May 13 23:49:23.229108 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit.
May 13 23:49:23.230214 systemd[1]: Started sshd@25-10.0.0.20:22-10.0.0.1:43398.service - OpenSSH per-connection server daemon (10.0.0.1:43398).
May 13 23:49:23.238585 systemd-logind[1469]: Removed session 25.
May 13 23:49:23.305823 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 43398 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:23.308025 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:23.319005 systemd-logind[1469]: New session 26 of user core.
May 13 23:49:23.328792 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 23:49:25.007847 containerd[1489]: time="2025-05-13T23:49:25.007793290Z" level=info msg="StopContainer for \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" with timeout 30 (s)"
May 13 23:49:25.014234 containerd[1489]: time="2025-05-13T23:49:25.013835495Z" level=info msg="Stop container \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" with signal terminated"
May 13 23:49:25.047270 systemd[1]: cri-containerd-990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853.scope: Deactivated successfully.
May 13 23:49:25.051849 containerd[1489]: time="2025-05-13T23:49:25.051010815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" id:\"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" pid:3006 exited_at:{seconds:1747180165 nanos:49676519}"
May 13 23:49:25.051849 containerd[1489]: time="2025-05-13T23:49:25.051481061Z" level=info msg="received exit event container_id:\"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" id:\"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" pid:3006 exited_at:{seconds:1747180165 nanos:49676519}"
May 13 23:49:25.096386 containerd[1489]: time="2025-05-13T23:49:25.096316622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" id:\"db71eb88e554399bf8b3a3b0b3aee2c7fe8053f5f954dba2a2b71eb308bfe952\" pid:4262 exited_at:{seconds:1747180165 nanos:95926829}"
May 13 23:49:25.099334 containerd[1489]: time="2025-05-13T23:49:25.099282309Z" level=info msg="StopContainer for \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" with timeout 2 (s)"
May 13 23:49:25.100471 containerd[1489]: time="2025-05-13T23:49:25.099675068Z" level=info msg="Stop container \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" with signal terminated"
May 13 23:49:25.123207 systemd-networkd[1420]: lxc_health: Link DOWN
May 13 23:49:25.123219 systemd-networkd[1420]: lxc_health: Lost carrier
May 13 23:49:25.146479 containerd[1489]: time="2025-05-13T23:49:25.145281661Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:49:25.160101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853-rootfs.mount: Deactivated successfully.
May 13 23:49:25.166394 systemd[1]: cri-containerd-5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc.scope: Deactivated successfully.
May 13 23:49:25.167025 systemd[1]: cri-containerd-5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc.scope: Consumed 8.289s CPU time, 125.6M memory peak, 160K read from disk, 13.3M written to disk.
May 13 23:49:25.167920 containerd[1489]: time="2025-05-13T23:49:25.167865497Z" level=info msg="received exit event container_id:\"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" id:\"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" pid:3286 exited_at:{seconds:1747180165 nanos:167581857}"
May 13 23:49:25.168239 containerd[1489]: time="2025-05-13T23:49:25.168200377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" id:\"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" pid:3286 exited_at:{seconds:1747180165 nanos:167581857}"
May 13 23:49:25.209440 containerd[1489]: time="2025-05-13T23:49:25.208595207Z" level=info msg="StopContainer for \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" returns successfully"
May 13 23:49:25.209440 containerd[1489]: time="2025-05-13T23:49:25.209339898Z" level=info msg="StopPodSandbox for \"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\""
May 13 23:49:25.225607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc-rootfs.mount: Deactivated successfully.
May 13 23:49:25.239642 containerd[1489]: time="2025-05-13T23:49:25.236788074Z" level=info msg="Container to stop \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:49:25.247466 systemd[1]: cri-containerd-21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19.scope: Deactivated successfully.
May 13 23:49:25.257254 containerd[1489]: time="2025-05-13T23:49:25.257173517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\" id:\"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\" pid:2713 exit_status:137 exited_at:{seconds:1747180165 nanos:249369662}"
May 13 23:49:25.273581 containerd[1489]: time="2025-05-13T23:49:25.260200221Z" level=info msg="StopContainer for \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" returns successfully"
May 13 23:49:25.277109 containerd[1489]: time="2025-05-13T23:49:25.277004745Z" level=info msg="StopPodSandbox for \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\""
May 13 23:49:25.277256 containerd[1489]: time="2025-05-13T23:49:25.277157066Z" level=info msg="Container to stop \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:49:25.277256 containerd[1489]: time="2025-05-13T23:49:25.277183395Z" level=info msg="Container to stop \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:49:25.277256 containerd[1489]: time="2025-05-13T23:49:25.277198875Z" level=info msg="Container to stop \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:49:25.277371 containerd[1489]: time="2025-05-13T23:49:25.277217702Z" level=info msg="Container to stop \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:49:25.277371 containerd[1489]: time="2025-05-13T23:49:25.277339092Z" level=info msg="Container to stop \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:49:25.296568 systemd[1]: cri-containerd-a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd.scope: Deactivated successfully.
May 13 23:49:25.346127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19-rootfs.mount: Deactivated successfully.
May 13 23:49:25.375704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd-rootfs.mount: Deactivated successfully.
May 13 23:49:25.377338 containerd[1489]: time="2025-05-13T23:49:25.377298930Z" level=info msg="shim disconnected" id=21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19 namespace=k8s.io
May 13 23:49:25.378096 containerd[1489]: time="2025-05-13T23:49:25.377965702Z" level=warning msg="cleaning up after shim disconnected" id=21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19 namespace=k8s.io
May 13 23:49:25.385853 containerd[1489]: time="2025-05-13T23:49:25.377989828Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:49:25.386488 kubelet[2597]: E0513 23:49:25.386453 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:25.404675 containerd[1489]: time="2025-05-13T23:49:25.404565178Z" level=info msg="shim disconnected" id=a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd namespace=k8s.io
May 13 23:49:25.404675 containerd[1489]: time="2025-05-13T23:49:25.404664007Z" level=warning msg="cleaning up after shim disconnected" id=a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd namespace=k8s.io
May 13 23:49:25.404897 containerd[1489]: time="2025-05-13T23:49:25.404679788Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:49:25.450172 containerd[1489]: time="2025-05-13T23:49:25.449977881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" id:\"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" pid:2795 exit_status:137 exited_at:{seconds:1747180165 nanos:302882185}"
May 13 23:49:25.455949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19-shm.mount: Deactivated successfully.
May 13 23:49:25.506775 kubelet[2597]: E0513 23:49:25.506704 2597 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 23:49:25.507504 containerd[1489]: time="2025-05-13T23:49:25.507427292Z" level=info msg="TearDown network for sandbox \"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\" successfully"
May 13 23:49:25.507504 containerd[1489]: time="2025-05-13T23:49:25.507482918Z" level=info msg="StopPodSandbox for \"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\" returns successfully"
May 13 23:49:25.510812 containerd[1489]: time="2025-05-13T23:49:25.510715214Z" level=info msg="TearDown network for sandbox \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" successfully"
May 13 23:49:25.510812 containerd[1489]: time="2025-05-13T23:49:25.510795938Z" level=info msg="StopPodSandbox for \"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" returns successfully"
May 13 23:49:25.513878 containerd[1489]: time="2025-05-13T23:49:25.513807102Z" level=info msg="received exit event sandbox_id:\"21a364390fca521e99b1fa172628891a177410fbc3491dfc0bc7e62e6aeb2e19\" exit_status:137 exited_at:{seconds:1747180165 nanos:249369662}"
May 13 23:49:25.514436 containerd[1489]: time="2025-05-13T23:49:25.514122564Z" level=info msg="received exit event sandbox_id:\"a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd\" exit_status:137 exited_at:{seconds:1747180165 nanos:302882185}"
May 13 23:49:25.632287 kubelet[2597]: I0513 23:49:25.631655 2597 scope.go:117] "RemoveContainer" containerID="5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc"
May 13 23:49:25.636799 containerd[1489]: time="2025-05-13T23:49:25.636653694Z" level=info msg="RemoveContainer for \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\""
May 13 23:49:25.657239 containerd[1489]: time="2025-05-13T23:49:25.657046483Z" level=info msg="RemoveContainer for \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" returns successfully"
May 13 23:49:25.659049 kubelet[2597]: I0513 23:49:25.658992 2597 scope.go:117] "RemoveContainer" containerID="d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf"
May 13 23:49:25.669689 containerd[1489]: time="2025-05-13T23:49:25.666145318Z" level=info msg="RemoveContainer for \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\""
May 13 23:49:25.681993 containerd[1489]: time="2025-05-13T23:49:25.680912455Z" level=info msg="RemoveContainer for \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\" returns successfully"
May 13 23:49:25.682174 kubelet[2597]: I0513 23:49:25.681251 2597 scope.go:117] "RemoveContainer" containerID="a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce"
May 13 23:49:25.684877 containerd[1489]: time="2025-05-13T23:49:25.684820149Z" level=info msg="RemoveContainer for \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\""
May 13 23:49:25.686313 kubelet[2597]: I0513 23:49:25.686261 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-hubble-tls\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.686313 kubelet[2597]: I0513 23:49:25.686307 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cni-path\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.686486 kubelet[2597]: I0513 23:49:25.686333 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983710ce-433a-4547-b775-1367d88b1600-cilium-config-path\") pod \"983710ce-433a-4547-b775-1367d88b1600\" (UID: \"983710ce-433a-4547-b775-1367d88b1600\") "
May 13 23:49:25.686486 kubelet[2597]: I0513 23:49:25.686354 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-lib-modules\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.686486 kubelet[2597]: I0513 23:49:25.686369 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-hostproc\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.686486 kubelet[2597]: I0513 23:49:25.686384 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-xtables-lock\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.686486 kubelet[2597]: I0513 23:49:25.686416 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f9rw\" (UniqueName: \"kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-kube-api-access-2f9rw\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.686486 kubelet[2597]: I0513 23:49:25.686437 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-host-proc-sys-kernel\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.687959 kubelet[2597]: I0513 23:49:25.686461 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-cgroup\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.687959 kubelet[2597]: I0513 23:49:25.686480 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-config-path\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.687959 kubelet[2597]: I0513 23:49:25.686494 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-bpf-maps\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.687959 kubelet[2597]: I0513 23:49:25.686511 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-etc-cni-netd\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.687959 kubelet[2597]: I0513 23:49:25.686526 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-run\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.687959 kubelet[2597]: I0513 23:49:25.686543 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-host-proc-sys-net\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.688754 kubelet[2597]: I0513 23:49:25.686563 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d90859b-f43a-479f-baf3-89c1b7de86d7-clustermesh-secrets\") pod \"6d90859b-f43a-479f-baf3-89c1b7de86d7\" (UID: \"6d90859b-f43a-479f-baf3-89c1b7de86d7\") "
May 13 23:49:25.688754 kubelet[2597]: I0513 23:49:25.686600 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm9k6\" (UniqueName: \"kubernetes.io/projected/983710ce-433a-4547-b775-1367d88b1600-kube-api-access-fm9k6\") pod \"983710ce-433a-4547-b775-1367d88b1600\" (UID: \"983710ce-433a-4547-b775-1367d88b1600\") "
May 13 23:49:25.688754 kubelet[2597]: I0513 23:49:25.687090 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.688754 kubelet[2597]: I0513 23:49:25.687119 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.688754 kubelet[2597]: I0513 23:49:25.687166 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.689228 kubelet[2597]: I0513 23:49:25.687201 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.689228 kubelet[2597]: I0513 23:49:25.687463 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.689228 kubelet[2597]: I0513 23:49:25.687502 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.697397 kubelet[2597]: I0513 23:49:25.697339 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.698174 kubelet[2597]: I0513 23:49:25.697738 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.698317 kubelet[2597]: I0513 23:49:25.698295 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.698511 kubelet[2597]: I0513 23:49:25.698467 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 23:49:25.701140 kubelet[2597]: I0513 23:49:25.701011 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 13 23:49:25.704246 kubelet[2597]: I0513 23:49:25.704190 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983710ce-433a-4547-b775-1367d88b1600-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "983710ce-433a-4547-b775-1367d88b1600" (UID: "983710ce-433a-4547-b775-1367d88b1600"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 13 23:49:25.705536 kubelet[2597]: I0513 23:49:25.705468 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983710ce-433a-4547-b775-1367d88b1600-kube-api-access-fm9k6" (OuterVolumeSpecName: "kube-api-access-fm9k6") pod "983710ce-433a-4547-b775-1367d88b1600" (UID: "983710ce-433a-4547-b775-1367d88b1600"). InnerVolumeSpecName "kube-api-access-fm9k6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 23:49:25.707249 kubelet[2597]: I0513 23:49:25.707153 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 23:49:25.708154 kubelet[2597]: I0513 23:49:25.708045 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-kube-api-access-2f9rw" (OuterVolumeSpecName: "kube-api-access-2f9rw") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "kube-api-access-2f9rw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 23:49:25.712771 kubelet[2597]: I0513 23:49:25.712675 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d90859b-f43a-479f-baf3-89c1b7de86d7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d90859b-f43a-479f-baf3-89c1b7de86d7" (UID: "6d90859b-f43a-479f-baf3-89c1b7de86d7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 13 23:49:25.733236 containerd[1489]: time="2025-05-13T23:49:25.733126149Z" level=info msg="RemoveContainer for \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\" returns successfully"
May 13 23:49:25.733789 kubelet[2597]: I0513 23:49:25.733714 2597 scope.go:117] "RemoveContainer" containerID="a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd"
May 13 23:49:25.736120 containerd[1489]: time="2025-05-13T23:49:25.736046891Z" level=info msg="RemoveContainer for \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\""
May 13 23:49:25.742821 containerd[1489]: time="2025-05-13T23:49:25.742751700Z" level=info msg="RemoveContainer for \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\" returns successfully"
May 13 23:49:25.743169 kubelet[2597]: I0513 23:49:25.743116 2597 scope.go:117] "RemoveContainer" containerID="e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583"
May 13 23:49:25.750455 containerd[1489]: time="2025-05-13T23:49:25.750136857Z" level=info msg="RemoveContainer for \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\""
May 13 23:49:25.755802 containerd[1489]: time="2025-05-13T23:49:25.755713944Z" level=info msg="RemoveContainer for \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\" returns successfully"
May 13 23:49:25.756421 kubelet[2597]: I0513 23:49:25.756244 2597 scope.go:117] "RemoveContainer" containerID="5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc"
May 13 23:49:25.760434 containerd[1489]: time="2025-05-13T23:49:25.759942010Z" level=error msg="ContainerStatus for \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\": not found"
May 13 23:49:25.768750 kubelet[2597]: E0513 23:49:25.768672 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\": not found" containerID="5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc"
May 13 23:49:25.768947 kubelet[2597]: I0513 23:49:25.768738 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc"} err="failed to get container status \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\": rpc error: code = NotFound desc = an error occurred when try to find container \"5961503ba61bf6311f819d61d2c6bbbc3e05bfd0c2d3786bbfe7290652b85ecc\": not found"
May 13 23:49:25.768947 kubelet[2597]: I0513 23:49:25.768850 2597 scope.go:117] "RemoveContainer" containerID="d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf"
May 13 23:49:25.769285 containerd[1489]: time="2025-05-13T23:49:25.769208615Z" level=error msg="ContainerStatus for \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\": not found"
May 13 23:49:25.769492 kubelet[2597]: E0513 23:49:25.769445 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\": not found" containerID="d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf"
May 13 23:49:25.769552 kubelet[2597]: I0513 23:49:25.769482 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf"} err="failed to get container status \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"d320b2e98ef4983694159ba926f4224211a6f2bde9d5f157a617ff8e4622a8bf\": not found"
May 13 23:49:25.769552 kubelet[2597]: I0513 23:49:25.769510 2597 scope.go:117] "RemoveContainer" containerID="a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce"
May 13 23:49:25.769717 containerd[1489]: time="2025-05-13T23:49:25.769665196Z" level=error msg="ContainerStatus for \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\": not found"
May 13 23:49:25.769824 kubelet[2597]: E0513 23:49:25.769782 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\": not found" containerID="a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce"
May 13 23:49:25.769824 kubelet[2597]: I0513 23:49:25.769812 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce"} err="failed to get container status \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"a01f2eddcf1d5fa467f190ab515608e822ab6e367273b9ba1d5c2abfdb3bd8ce\": not found"
May 13 23:49:25.769909 kubelet[2597]: I0513 23:49:25.769833 2597 scope.go:117] "RemoveContainer" containerID="a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd"
May 13 23:49:25.770051 containerd[1489]: time="2025-05-13T23:49:25.770002639Z" level=error msg="ContainerStatus for \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\": not found"
May 13 23:49:25.770185 kubelet[2597]: E0513 23:49:25.770134 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\": not found" containerID="a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd"
May 13 23:49:25.770232 kubelet[2597]: I0513 23:49:25.770172 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd"} err="failed to get container status \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\": rpc error: code = NotFound desc = an error occurred when try to find container \"a28dec1db38e5f572df734decd4dd93f7c6acb99e8074389f679a90542e86dfd\": not found"
May 13 23:49:25.770232 kubelet[2597]: I0513 23:49:25.770198 2597 scope.go:117] "RemoveContainer" containerID="e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583"
May 13 23:49:25.770427 containerd[1489]: time="2025-05-13T23:49:25.770353579Z" level=error msg="ContainerStatus for \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\": not found"
May 13 23:49:25.770543 kubelet[2597]: E0513 23:49:25.770501 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\": not found" containerID="e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583"
May 13 23:49:25.770543 kubelet[2597]: I0513 23:49:25.770530 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583"} err="failed to get container status \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5320a9a011f4eeae7358dff175704627d66d87c4d4a0d95f84d13318193b583\": not found"
May 13 23:49:25.770633 kubelet[2597]: I0513 23:49:25.770550 2597 scope.go:117] "RemoveContainer" containerID="990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853"
May 13 23:49:25.778091 containerd[1489]: time="2025-05-13T23:49:25.777869245Z" level=info msg="RemoveContainer for \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\""
May 13 23:49:25.783700 containerd[1489]: time="2025-05-13T23:49:25.783614864Z" level=info msg="RemoveContainer for \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" returns successfully"
May 13 23:49:25.786356 kubelet[2597]: I0513 23:49:25.784153 2597 scope.go:117] "RemoveContainer" containerID="990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853"
May 13 23:49:25.786356 kubelet[2597]: E0513 23:49:25.784842 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\": not found" containerID="990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853"
May 13 23:49:25.786356 kubelet[2597]: I0513 23:49:25.784876 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853"} err="failed to get container status \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\": rpc error: code = NotFound desc = an error occurred when try to find container \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\": not found"
May 13 23:49:25.786632 containerd[1489]: time="2025-05-13T23:49:25.784596517Z" level=error msg="ContainerStatus for \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"990e4f9158d6281c8859743481360e3c48d74c6bfffbef69037f67dad8209853\": not found"
May 13 23:49:25.787514 kubelet[2597]: I0513 23:49:25.787462 2597 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787767 kubelet[2597]: I0513 23:49:25.787716 2597 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787767 kubelet[2597]: I0513 23:49:25.787744 2597 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787767 kubelet[2597]: I0513 23:49:25.787759 2597 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787767 kubelet[2597]: I0513 23:49:25.787772 2597 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787942 kubelet[2597]: I0513 23:49:25.787787 2597 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d90859b-f43a-479f-baf3-89c1b7de86d7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787942 kubelet[2597]: I0513 23:49:25.787800 2597 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fm9k6\" (UniqueName: \"kubernetes.io/projected/983710ce-433a-4547-b775-1367d88b1600-kube-api-access-fm9k6\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787942 kubelet[2597]: I0513 23:49:25.787814 2597 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787942 kubelet[2597]: I0513 23:49:25.787827 2597 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787942 kubelet[2597]: I0513 23:49:25.787839 2597 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983710ce-433a-4547-b775-1367d88b1600-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787942 kubelet[2597]: I0513 23:49:25.787851 2597 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787942 kubelet[2597]: I0513 23:49:25.787863 2597 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.787942 kubelet[2597]: I0513 23:49:25.787880 2597 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.788350 kubelet[2597]: I0513 23:49:25.787895 2597 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2f9rw\" (UniqueName: \"kubernetes.io/projected/6d90859b-f43a-479f-baf3-89c1b7de86d7-kube-api-access-2f9rw\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.788350 kubelet[2597]: I0513 23:49:25.787907 2597 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.788350 kubelet[2597]: I0513 23:49:25.787918 2597 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d90859b-f43a-479f-baf3-89c1b7de86d7-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 23:49:25.956037 systemd[1]: Removed slice kubepods-burstable-pod6d90859b_f43a_479f_baf3_89c1b7de86d7.slice - libcontainer container kubepods-burstable-pod6d90859b_f43a_479f_baf3_89c1b7de86d7.slice.
May 13 23:49:25.960672 systemd[1]: kubepods-burstable-pod6d90859b_f43a_479f_baf3_89c1b7de86d7.slice: Consumed 8.443s CPU time, 125.9M memory peak, 176K read from disk, 16.6M written to disk.
May 13 23:49:25.987722 systemd[1]: Removed slice kubepods-besteffort-pod983710ce_433a_4547_b775_1367d88b1600.slice - libcontainer container kubepods-besteffort-pod983710ce_433a_4547_b775_1367d88b1600.slice.
May 13 23:49:26.161218 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a197a6d66eaa11ed8b2ac56edf214d10cd023e4ad487a7ded2c3b6e8600036dd-shm.mount: Deactivated successfully.
May 13 23:49:26.161364 systemd[1]: var-lib-kubelet-pods-6d90859b\x2df43a\x2d479f\x2dbaf3\x2d89c1b7de86d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2f9rw.mount: Deactivated successfully.
May 13 23:49:26.161486 systemd[1]: var-lib-kubelet-pods-983710ce\x2d433a\x2d4547\x2db775\x2d1367d88b1600-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfm9k6.mount: Deactivated successfully.
May 13 23:49:26.161594 systemd[1]: var-lib-kubelet-pods-6d90859b\x2df43a\x2d479f\x2dbaf3\x2d89c1b7de86d7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 23:49:26.161693 systemd[1]: var-lib-kubelet-pods-6d90859b\x2df43a\x2d479f\x2dbaf3\x2d89c1b7de86d7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 23:49:26.866510 sshd[4234]: Connection closed by 10.0.0.1 port 43398
May 13 23:49:26.867653 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
May 13 23:49:26.895526 systemd[1]: sshd@25-10.0.0.20:22-10.0.0.1:43398.service: Deactivated successfully.
May 13 23:49:26.901336 systemd[1]: session-26.scope: Deactivated successfully.
May 13 23:49:26.908595 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit.
May 13 23:49:26.911684 systemd[1]: Started sshd@26-10.0.0.20:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150).
May 13 23:49:26.917682 systemd-logind[1469]: Removed session 26.
May 13 23:49:26.994934 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:26.995932 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:27.011227 systemd-logind[1469]: New session 27 of user core.
May 13 23:49:27.017920 systemd[1]: Started session-27.scope - Session 27 of User core.
May 13 23:49:27.389970 kubelet[2597]: I0513 23:49:27.389883 2597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d90859b-f43a-479f-baf3-89c1b7de86d7" path="/var/lib/kubelet/pods/6d90859b-f43a-479f-baf3-89c1b7de86d7/volumes"
May 13 23:49:27.391197 kubelet[2597]: I0513 23:49:27.391157 2597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983710ce-433a-4547-b775-1367d88b1600" path="/var/lib/kubelet/pods/983710ce-433a-4547-b775-1367d88b1600/volumes"
May 13 23:49:28.043003 sshd[4385]: Connection closed by 10.0.0.1 port 39150
May 13 23:49:28.043461 sshd-session[4382]: pam_unix(sshd:session): session closed for user core
May 13 23:49:28.060209 systemd[1]: sshd@26-10.0.0.20:22-10.0.0.1:39150.service: Deactivated successfully.
May 13 23:49:28.063183 systemd[1]: session-27.scope: Deactivated successfully.
May 13 23:49:28.073914 systemd[1]: Started sshd@27-10.0.0.20:22-10.0.0.1:39156.service - OpenSSH per-connection server daemon (10.0.0.1:39156).
May 13 23:49:28.074499 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit.
May 13 23:49:28.078947 systemd-logind[1469]: Removed session 27.
May 13 23:49:28.090053 kubelet[2597]: I0513 23:49:28.086216 2597 memory_manager.go:355] "RemoveStaleState removing state" podUID="983710ce-433a-4547-b775-1367d88b1600" containerName="cilium-operator"
May 13 23:49:28.090053 kubelet[2597]: I0513 23:49:28.086257 2597 memory_manager.go:355] "RemoveStaleState removing state" podUID="6d90859b-f43a-479f-baf3-89c1b7de86d7" containerName="cilium-agent"
May 13 23:49:28.114202 systemd[1]: Created slice kubepods-burstable-podeae29f56_e5cb_4eba_9094_43e2234c1e2a.slice - libcontainer container kubepods-burstable-podeae29f56_e5cb_4eba_9094_43e2234c1e2a.slice.
May 13 23:49:28.157119 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 39156 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:28.159621 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:28.171766 systemd-logind[1469]: New session 28 of user core.
May 13 23:49:28.190766 systemd[1]: Started session-28.scope - Session 28 of User core.
May 13 23:49:28.220332 kubelet[2597]: I0513 23:49:28.218577 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-etc-cni-netd\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220332 kubelet[2597]: I0513 23:49:28.218646 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-cilium-run\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220332 kubelet[2597]: I0513 23:49:28.218686 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-cni-path\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220332 kubelet[2597]: I0513 23:49:28.218705 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-host-proc-sys-net\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220332 kubelet[2597]: I0513 23:49:28.218727 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-hostproc\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220332 kubelet[2597]: I0513 23:49:28.218745 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-xtables-lock\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220664 kubelet[2597]: I0513 23:49:28.218763 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eae29f56-e5cb-4eba-9094-43e2234c1e2a-cilium-ipsec-secrets\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220664 kubelet[2597]: I0513 23:49:28.218783 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eae29f56-e5cb-4eba-9094-43e2234c1e2a-clustermesh-secrets\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220664 kubelet[2597]: I0513 23:49:28.218805 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drlxr\" (UniqueName: \"kubernetes.io/projected/eae29f56-e5cb-4eba-9094-43e2234c1e2a-kube-api-access-drlxr\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220664 kubelet[2597]: I0513 23:49:28.218826 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-cilium-cgroup\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220664 kubelet[2597]: I0513 23:49:28.218847 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-bpf-maps\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220904 kubelet[2597]: I0513 23:49:28.218868 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-host-proc-sys-kernel\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220904 kubelet[2597]: I0513 23:49:28.218890 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eae29f56-e5cb-4eba-9094-43e2234c1e2a-lib-modules\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220904 kubelet[2597]: I0513 23:49:28.218912 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eae29f56-e5cb-4eba-9094-43e2234c1e2a-cilium-config-path\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.220904 kubelet[2597]: I0513 23:49:28.218931 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eae29f56-e5cb-4eba-9094-43e2234c1e2a-hubble-tls\") pod \"cilium-st4xs\" (UID: \"eae29f56-e5cb-4eba-9094-43e2234c1e2a\") " pod="kube-system/cilium-st4xs"
May 13 23:49:28.260780 sshd[4399]: Connection closed by 10.0.0.1 port 39156
May 13 23:49:28.259431 sshd-session[4396]: pam_unix(sshd:session): session closed for user core
May 13 23:49:28.283995 systemd[1]: sshd@27-10.0.0.20:22-10.0.0.1:39156.service: Deactivated successfully.
May 13 23:49:28.286862 systemd[1]: session-28.scope: Deactivated successfully.
May 13 23:49:28.289737 systemd-logind[1469]: Session 28 logged out. Waiting for processes to exit.
May 13 23:49:28.292183 systemd[1]: Started sshd@28-10.0.0.20:22-10.0.0.1:39172.service - OpenSSH per-connection server daemon (10.0.0.1:39172).
May 13 23:49:28.299711 systemd-logind[1469]: Removed session 28.
May 13 23:49:28.301446 kubelet[2597]: I0513 23:49:28.301349 2597 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T23:49:28Z","lastTransitionTime":"2025-05-13T23:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 23:49:28.377551 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 39172 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:49:28.379883 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:28.384123 kubelet[2597]: E0513 23:49:28.384070 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:28.391443 systemd-logind[1469]: New session 29 of user core.
May 13 23:49:28.406085 systemd[1]: Started session-29.scope - Session 29 of User core.
May 13 23:49:28.424952 kubelet[2597]: E0513 23:49:28.424870 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:28.425647 containerd[1489]: time="2025-05-13T23:49:28.425566402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-st4xs,Uid:eae29f56-e5cb-4eba-9094-43e2234c1e2a,Namespace:kube-system,Attempt:0,}"
May 13 23:49:28.456202 containerd[1489]: time="2025-05-13T23:49:28.456139218Z" level=info msg="connecting to shim 2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505" address="unix:///run/containerd/s/645eb84821ca86c21d6044428e0c4a1b81887c736dbb3522a407e38438e32d1f" namespace=k8s.io protocol=ttrpc version=3
May 13 23:49:28.495892 systemd[1]: Started cri-containerd-2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505.scope - libcontainer container 2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505.
May 13 23:49:28.558279 containerd[1489]: time="2025-05-13T23:49:28.557698573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-st4xs,Uid:eae29f56-e5cb-4eba-9094-43e2234c1e2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\""
May 13 23:49:28.559777 kubelet[2597]: E0513 23:49:28.559397 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:28.562194 containerd[1489]: time="2025-05-13T23:49:28.561911219Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 23:49:28.586855 containerd[1489]: time="2025-05-13T23:49:28.586771171Z" level=info msg="Container fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:28.606120 containerd[1489]: time="2025-05-13T23:49:28.606044559Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1\""
May 13 23:49:28.608237 containerd[1489]: time="2025-05-13T23:49:28.607022031Z" level=info msg="StartContainer for \"fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1\""
May 13 23:49:28.608237 containerd[1489]: time="2025-05-13T23:49:28.608173917Z" level=info msg="connecting to shim fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1" address="unix:///run/containerd/s/645eb84821ca86c21d6044428e0c4a1b81887c736dbb3522a407e38438e32d1f" protocol=ttrpc version=3
May 13 23:49:28.659973 systemd[1]: Started cri-containerd-fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1.scope - libcontainer container fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1.
May 13 23:49:28.752431 containerd[1489]: time="2025-05-13T23:49:28.752357938Z" level=info msg="StartContainer for \"fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1\" returns successfully"
May 13 23:49:28.769154 systemd[1]: cri-containerd-fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1.scope: Deactivated successfully.
May 13 23:49:28.772945 containerd[1489]: time="2025-05-13T23:49:28.772697368Z" level=info msg="received exit event container_id:\"fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1\" id:\"fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1\" pid:4480 exited_at:{seconds:1747180168 nanos:771640895}"
May 13 23:49:28.772945 containerd[1489]: time="2025-05-13T23:49:28.772772702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1\" id:\"fefd2ebffb25d2cde9ad987eb0a6f6598df0d05f769dd0e2be48b153c42cc2b1\" pid:4480 exited_at:{seconds:1747180168 nanos:771640895}"
May 13 23:49:29.672299 kubelet[2597]: E0513 23:49:29.668422 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:29.675910 containerd[1489]: time="2025-05-13T23:49:29.675850928Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:49:29.731933 containerd[1489]: time="2025-05-13T23:49:29.728510184Z" level=info msg="Container 026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:29.754982 containerd[1489]: time="2025-05-13T23:49:29.754906394Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9\""
May 13 23:49:29.757365 containerd[1489]: time="2025-05-13T23:49:29.756053328Z" level=info msg="StartContainer for \"026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9\""
May 13 23:49:29.757365 containerd[1489]: time="2025-05-13T23:49:29.757054825Z" level=info msg="connecting to shim 026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9" address="unix:///run/containerd/s/645eb84821ca86c21d6044428e0c4a1b81887c736dbb3522a407e38438e32d1f" protocol=ttrpc version=3
May 13 23:49:29.831811 systemd[1]: Started cri-containerd-026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9.scope - libcontainer container 026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9.
May 13 23:49:29.904651 containerd[1489]: time="2025-05-13T23:49:29.904581350Z" level=info msg="StartContainer for \"026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9\" returns successfully"
May 13 23:49:29.922472 systemd[1]: cri-containerd-026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9.scope: Deactivated successfully.
May 13 23:49:29.924110 containerd[1489]: time="2025-05-13T23:49:29.924067350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9\" id:\"026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9\" pid:4526 exited_at:{seconds:1747180169 nanos:923730018}"
May 13 23:49:29.924517 containerd[1489]: time="2025-05-13T23:49:29.924481539Z" level=info msg="received exit event container_id:\"026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9\" id:\"026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9\" pid:4526 exited_at:{seconds:1747180169 nanos:923730018}"
May 13 23:49:29.976921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-026037933c86e808f8dc07d29e501b2851d83ef8a2ee467462b1a2cfde8ed8f9-rootfs.mount: Deactivated successfully.
May 13 23:49:30.508792 kubelet[2597]: E0513 23:49:30.508723 2597 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 23:49:30.680011 kubelet[2597]: E0513 23:49:30.679742 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:30.705840 containerd[1489]: time="2025-05-13T23:49:30.705696522Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:49:30.771664 containerd[1489]: time="2025-05-13T23:49:30.763663306Z" level=info msg="Container de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:30.794592 containerd[1489]: time="2025-05-13T23:49:30.793802392Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb\""
May 13 23:49:30.796688 containerd[1489]: time="2025-05-13T23:49:30.794987810Z" level=info msg="StartContainer for \"de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb\""
May 13 23:49:30.797851 containerd[1489]: time="2025-05-13T23:49:30.797736262Z" level=info msg="connecting to shim de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb" address="unix:///run/containerd/s/645eb84821ca86c21d6044428e0c4a1b81887c736dbb3522a407e38438e32d1f" protocol=ttrpc version=3
May 13 23:49:30.845765 systemd[1]: Started cri-containerd-de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb.scope - libcontainer container de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb.
May 13 23:49:30.959024 containerd[1489]: time="2025-05-13T23:49:30.958940713Z" level=info msg="StartContainer for \"de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb\" returns successfully"
May 13 23:49:30.974009 systemd[1]: cri-containerd-de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb.scope: Deactivated successfully.
May 13 23:49:30.980968 containerd[1489]: time="2025-05-13T23:49:30.976663302Z" level=info msg="received exit event container_id:\"de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb\" id:\"de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb\" pid:4571 exited_at:{seconds:1747180170 nanos:975290578}"
May 13 23:49:30.980968 containerd[1489]: time="2025-05-13T23:49:30.977519452Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb\" id:\"de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb\" pid:4571 exited_at:{seconds:1747180170 nanos:975290578}"
May 13 23:49:31.063556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de418785807095a3d0b78933ef28fc219b4bea1680e88d7216596ca38e8791eb-rootfs.mount: Deactivated successfully.
May 13 23:49:31.691092 kubelet[2597]: E0513 23:49:31.688634 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:31.702877 containerd[1489]: time="2025-05-13T23:49:31.702817237Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:49:31.722000 containerd[1489]: time="2025-05-13T23:49:31.721670856Z" level=info msg="Container aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:31.750267 containerd[1489]: time="2025-05-13T23:49:31.750191042Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0\""
May 13 23:49:31.752212 containerd[1489]: time="2025-05-13T23:49:31.752149849Z" level=info msg="StartContainer for \"aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0\""
May 13 23:49:31.753362 containerd[1489]: time="2025-05-13T23:49:31.753315568Z" level=info msg="connecting to shim aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0" address="unix:///run/containerd/s/645eb84821ca86c21d6044428e0c4a1b81887c736dbb3522a407e38438e32d1f" protocol=ttrpc version=3
May 13 23:49:31.791763 systemd[1]: Started cri-containerd-aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0.scope - libcontainer container aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0.
May 13 23:49:31.854300 systemd[1]: cri-containerd-aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0.scope: Deactivated successfully.
May 13 23:49:31.855257 containerd[1489]: time="2025-05-13T23:49:31.855200421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0\" id:\"aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0\" pid:4611 exited_at:{seconds:1747180171 nanos:854731830}"
May 13 23:49:31.940453 containerd[1489]: time="2025-05-13T23:49:31.940261579Z" level=info msg="received exit event container_id:\"aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0\" id:\"aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0\" pid:4611 exited_at:{seconds:1747180171 nanos:854731830}"
May 13 23:49:31.943888 containerd[1489]: time="2025-05-13T23:49:31.943377168Z" level=info msg="StartContainer for \"aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0\" returns successfully"
May 13 23:49:32.029060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa15429a42ae51df2d76b7e8c7271302181e4467a49d106ce52fe9e625d7fcb0-rootfs.mount: Deactivated successfully.
May 13 23:49:32.703672 kubelet[2597]: E0513 23:49:32.702074 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:32.706995 containerd[1489]: time="2025-05-13T23:49:32.706259553Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:49:32.745263 containerd[1489]: time="2025-05-13T23:49:32.745188030Z" level=info msg="Container e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:32.783822 containerd[1489]: time="2025-05-13T23:49:32.782635982Z" level=info msg="CreateContainer within sandbox \"2c87ed84235de0b1dd97436a0873916bdc4b87f512586eb94e44cc3f4d398505\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\""
May 13 23:49:32.784910 containerd[1489]: time="2025-05-13T23:49:32.784843302Z" level=info msg="StartContainer for \"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\""
May 13 23:49:32.786267 containerd[1489]: time="2025-05-13T23:49:32.786224910Z" level=info msg="connecting to shim e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe" address="unix:///run/containerd/s/645eb84821ca86c21d6044428e0c4a1b81887c736dbb3522a407e38438e32d1f" protocol=ttrpc version=3
May 13 23:49:32.834781 systemd[1]: Started cri-containerd-e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe.scope - libcontainer container e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe.
May 13 23:49:32.902601 containerd[1489]: time="2025-05-13T23:49:32.902507317Z" level=info msg="StartContainer for \"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\" returns successfully"
May 13 23:49:33.029054 containerd[1489]: time="2025-05-13T23:49:33.028377628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\" id:\"d7de397ce472c0d5739f97ee25335d1de8db22e0ffa3ed03a0fa2153ddfd8a0a\" pid:4680 exited_at:{seconds:1747180173 nanos:27713174}"
May 13 23:49:33.549698 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 23:49:33.719643 kubelet[2597]: E0513 23:49:33.718342 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:33.746869 kubelet[2597]: I0513 23:49:33.745941 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-st4xs" podStartSLOduration=5.745919123 podStartE2EDuration="5.745919123s" podCreationTimestamp="2025-05-13 23:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:33.744651623 +0000 UTC m=+98.441874644" watchObservedRunningTime="2025-05-13 23:49:33.745919123 +0000 UTC m=+98.443142125"
May 13 23:49:34.721517 kubelet[2597]: E0513 23:49:34.721466 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:35.033952 containerd[1489]: time="2025-05-13T23:49:35.033785984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\" id:\"9d157b76e10e89ad9d305c4d76bf85c9b813c34a9f0128a8dec8aeb8a3f66e84\" pid:4787 exit_status:1 exited_at:{seconds:1747180175 nanos:33358160}"
May 13 23:49:35.723517 kubelet[2597]: E0513 23:49:35.723462 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:37.178022 containerd[1489]: time="2025-05-13T23:49:37.177955727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\" id:\"f662929a89514245171f36498b1324d0668f60331298eefe19cad996384d6fc7\" pid:5182 exit_status:1 exited_at:{seconds:1747180177 nanos:177590373}"
May 13 23:49:37.262968 systemd-networkd[1420]: lxc_health: Link UP
May 13 23:49:37.271777 systemd-networkd[1420]: lxc_health: Gained carrier
May 13 23:49:38.426316 kubelet[2597]: E0513 23:49:38.426273 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:38.727582 systemd-networkd[1420]: lxc_health: Gained IPv6LL
May 13 23:49:38.731967 kubelet[2597]: E0513 23:49:38.731917 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:39.408623 containerd[1489]: time="2025-05-13T23:49:39.408521753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\" id:\"603affea97dd76ef901e230cc22111d9f1e85d11ffc5c0969766a9f01dc884dc\" pid:5272 exited_at:{seconds:1747180179 nanos:407982429}"
May 13 23:49:39.733744 kubelet[2597]: E0513 23:49:39.733596 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:49:41.522662 containerd[1489]: time="2025-05-13T23:49:41.521560028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\" id:\"608a25aa2c1cce3b77c34480c265a5eadfb03a3a49de9b3709b2bf592bc98dad\" pid:5306 exited_at:{seconds:1747180181 nanos:521099354}"
May 13 23:49:43.642538 containerd[1489]: time="2025-05-13T23:49:43.642476788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2675bb3a6ae2b4d6be0328d75c054dbd2be99038ef9c9b67c9cb5e4bd8f28fe\" id:\"1b716c4810d361ed4f7d9feb87930003db18eb7596b4e8344551e03cb5214cc1\" pid:5329 exited_at:{seconds:1747180183 nanos:642095045}"
May 13 23:49:43.656960 sshd[4414]: Connection closed by 10.0.0.1 port 39172
May 13 23:49:43.657448 sshd-session[4406]: pam_unix(sshd:session): session closed for user core
May 13 23:49:43.661319 systemd[1]: sshd@28-10.0.0.20:22-10.0.0.1:39172.service: Deactivated successfully.
May 13 23:49:43.663522 systemd[1]: session-29.scope: Deactivated successfully.
May 13 23:49:43.664345 systemd-logind[1469]: Session 29 logged out. Waiting for processes to exit.
May 13 23:49:43.665258 systemd-logind[1469]: Removed session 29.