May 27 17:48:18.890277 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 17:48:18.890321 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:48:18.890333 kernel: BIOS-provided physical RAM map:
May 27 17:48:18.890340 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
May 27 17:48:18.890346 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 27 17:48:18.890353 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
May 27 17:48:18.890361 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 27 17:48:18.890367 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
May 27 17:48:18.890373 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 27 17:48:18.890389 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 27 17:48:18.890403 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 27 17:48:18.890420 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 27 17:48:18.890433 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 27 17:48:18.890441 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 27 17:48:18.890464 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 27 17:48:18.890471 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 27 17:48:18.890492 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 17:48:18.890500 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 17:48:18.890507 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 17:48:18.890514 kernel: NX (Execute Disable) protection: active
May 27 17:48:18.890521 kernel: APIC: Static calls initialized
May 27 17:48:18.890528 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
May 27 17:48:18.890535 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
May 27 17:48:18.890542 kernel: extended physical RAM map:
May 27 17:48:18.890549 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
May 27 17:48:18.890556 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 27 17:48:18.890563 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
May 27 17:48:18.890572 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 27 17:48:18.890579 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
May 27 17:48:18.890586 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
May 27 17:48:18.890593 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
May 27 17:48:18.890600 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
May 27 17:48:18.890607 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
May 27 17:48:18.890614 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 27 17:48:18.890621 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 27 17:48:18.890628 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 27 17:48:18.890635 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 27 17:48:18.890642 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 27 17:48:18.890652 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 27 17:48:18.890661 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 27 17:48:18.890675 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 27 17:48:18.890684 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 17:48:18.890694 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 17:48:18.890704 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 17:48:18.890714 kernel: efi: EFI v2.7 by EDK II
May 27 17:48:18.890721 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
May 27 17:48:18.890728 kernel: random: crng init done
May 27 17:48:18.890736 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 27 17:48:18.890743 kernel: secureboot: Secure boot enabled
May 27 17:48:18.890750 kernel: SMBIOS 2.8 present.
May 27 17:48:18.890757 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 27 17:48:18.890765 kernel: DMI: Memory slots populated: 1/1
May 27 17:48:18.890772 kernel: Hypervisor detected: KVM
May 27 17:48:18.890779 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 17:48:18.890786 kernel: kvm-clock: using sched offset of 5764676992 cycles
May 27 17:48:18.890796 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 17:48:18.890804 kernel: tsc: Detected 2794.748 MHz processor
May 27 17:48:18.890811 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 17:48:18.890819 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 17:48:18.890826 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
May 27 17:48:18.890834 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 17:48:18.890841 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 17:48:18.890849 kernel: Using GB pages for direct mapping
May 27 17:48:18.890856 kernel: ACPI: Early table checksum verification disabled
May 27 17:48:18.890880 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
May 27 17:48:18.890893 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 27 17:48:18.890901 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:48:18.890909 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:48:18.890916 kernel: ACPI: FACS 0x000000009BBDD000 000040
May 27 17:48:18.890924 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:48:18.890931 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:48:18.890939 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:48:18.890946 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:48:18.890956 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 27 17:48:18.890963 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
May 27 17:48:18.890970 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
May 27 17:48:18.890978 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
May 27 17:48:18.890985 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
May 27 17:48:18.890993 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
May 27 17:48:18.891000 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
May 27 17:48:18.891007 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
May 27 17:48:18.891015 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
May 27 17:48:18.891025 kernel: No NUMA configuration found
May 27 17:48:18.891035 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
May 27 17:48:18.891044 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
May 27 17:48:18.891054 kernel: Zone ranges:
May 27 17:48:18.891064 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 17:48:18.891074 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
May 27 17:48:18.891084 kernel: Normal empty
May 27 17:48:18.891093 kernel: Device empty
May 27 17:48:18.891100 kernel: Movable zone start for each node
May 27 17:48:18.891118 kernel: Early memory node ranges
May 27 17:48:18.891129 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
May 27 17:48:18.891138 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
May 27 17:48:18.891147 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
May 27 17:48:18.891157 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
May 27 17:48:18.891166 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
May 27 17:48:18.891174 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
May 27 17:48:18.891181 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 17:48:18.891188 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
May 27 17:48:18.891196 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 17:48:18.891207 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 27 17:48:18.891214 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 27 17:48:18.891222 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
May 27 17:48:18.891229 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 17:48:18.891236 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 17:48:18.891244 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 17:48:18.891251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 17:48:18.891258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 17:48:18.891266 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 17:48:18.891275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 17:48:18.891283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 17:48:18.891290 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 17:48:18.891297 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 17:48:18.891304 kernel: TSC deadline timer available
May 27 17:48:18.891312 kernel: CPU topo: Max. logical packages: 1
May 27 17:48:18.891319 kernel: CPU topo: Max. logical dies: 1
May 27 17:48:18.891327 kernel: CPU topo: Max. dies per package: 1
May 27 17:48:18.891342 kernel: CPU topo: Max. threads per core: 1
May 27 17:48:18.891350 kernel: CPU topo: Num. cores per package: 4
May 27 17:48:18.891357 kernel: CPU topo: Num. threads per package: 4
May 27 17:48:18.891365 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 17:48:18.891374 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 17:48:18.891382 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 17:48:18.891390 kernel: kvm-guest: setup PV sched yield
May 27 17:48:18.891398 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 27 17:48:18.891405 kernel: Booting paravirtualized kernel on KVM
May 27 17:48:18.891415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 17:48:18.891423 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 17:48:18.891431 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 17:48:18.891439 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 17:48:18.891446 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 17:48:18.891454 kernel: kvm-guest: PV spinlocks enabled
May 27 17:48:18.891461 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 17:48:18.891470 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:48:18.891480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:48:18.891488 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:48:18.891496 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:48:18.891504 kernel: Fallback order for Node 0: 0
May 27 17:48:18.891511 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
May 27 17:48:18.891519 kernel: Policy zone: DMA32
May 27 17:48:18.891527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:48:18.891534 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 17:48:18.891548 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 17:48:18.891561 kernel: ftrace: allocated 157 pages with 5 groups
May 27 17:48:18.891569 kernel: Dynamic Preempt: voluntary
May 27 17:48:18.891583 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:48:18.891602 kernel: rcu: RCU event tracing is enabled.
May 27 17:48:18.891617 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 17:48:18.891625 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:48:18.891632 kernel: Rude variant of Tasks RCU enabled.
May 27 17:48:18.891640 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:48:18.891648 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:48:18.891657 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 17:48:18.891666 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:48:18.891673 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:48:18.891681 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:48:18.891689 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 17:48:18.891696 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:48:18.891704 kernel: Console: colour dummy device 80x25
May 27 17:48:18.891712 kernel: printk: legacy console [ttyS0] enabled
May 27 17:48:18.891720 kernel: ACPI: Core revision 20240827
May 27 17:48:18.891730 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 17:48:18.891737 kernel: APIC: Switch to symmetric I/O mode setup
May 27 17:48:18.891745 kernel: x2apic enabled
May 27 17:48:18.891753 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 17:48:18.891760 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 17:48:18.891768 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 17:48:18.891776 kernel: kvm-guest: setup PV IPIs
May 27 17:48:18.891783 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 17:48:18.891791 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 17:48:18.891801 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 17:48:18.891809 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 17:48:18.891817 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 17:48:18.891824 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 17:48:18.891832 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 17:48:18.891840 kernel: Spectre V2 : Mitigation: Retpolines
May 27 17:48:18.891847 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 17:48:18.891855 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 17:48:18.891876 kernel: RETBleed: Mitigation: untrained return thunk
May 27 17:48:18.891896 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 17:48:18.891904 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 17:48:18.891912 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 17:48:18.891929 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 17:48:18.891938 kernel: x86/bugs: return thunk changed
May 27 17:48:18.891955 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 17:48:18.891966 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 17:48:18.891976 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 17:48:18.891989 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 17:48:18.891999 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 17:48:18.892007 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 17:48:18.892015 kernel: Freeing SMP alternatives memory: 32K
May 27 17:48:18.892022 kernel: pid_max: default: 32768 minimum: 301
May 27 17:48:18.892030 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:48:18.892038 kernel: landlock: Up and running.
May 27 17:48:18.892045 kernel: SELinux: Initializing.
May 27 17:48:18.892053 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:48:18.892063 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:48:18.892071 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 17:48:18.892079 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 17:48:18.892086 kernel: ... version:                0
May 27 17:48:18.892094 kernel: ... bit width:              48
May 27 17:48:18.892101 kernel: ... generic registers:      6
May 27 17:48:18.892109 kernel: ... value mask:             0000ffffffffffff
May 27 17:48:18.892117 kernel: ... max period:             00007fffffffffff
May 27 17:48:18.892124 kernel: ... fixed-purpose events:   0
May 27 17:48:18.892134 kernel: ... event mask:             000000000000003f
May 27 17:48:18.892141 kernel: signal: max sigframe size: 1776
May 27 17:48:18.892149 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:48:18.892157 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:48:18.892165 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:48:18.892172 kernel: smp: Bringing up secondary CPUs ...
May 27 17:48:18.892180 kernel: smpboot: x86: Booting SMP configuration:
May 27 17:48:18.892187 kernel: .... node #0, CPUs: #1 #2 #3
May 27 17:48:18.892195 kernel: smp: Brought up 1 node, 4 CPUs
May 27 17:48:18.892205 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 17:48:18.892213 kernel: Memory: 2409212K/2552216K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137064K reserved, 0K cma-reserved)
May 27 17:48:18.892221 kernel: devtmpfs: initialized
May 27 17:48:18.892228 kernel: x86/mm: Memory block size: 128MB
May 27 17:48:18.892236 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
May 27 17:48:18.892244 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
May 27 17:48:18.892252 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:48:18.892260 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 17:48:18.892267 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:48:18.892277 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:48:18.892284 kernel: audit: initializing netlink subsys (disabled)
May 27 17:48:18.892292 kernel: audit: type=2000 audit(1748368096.064:1): state=initialized audit_enabled=0 res=1
May 27 17:48:18.892300 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:48:18.892308 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 17:48:18.892315 kernel: cpuidle: using governor menu
May 27 17:48:18.892323 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:48:18.892330 kernel: dca service started, version 1.12.1
May 27 17:48:18.892338 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 27 17:48:18.892348 kernel: PCI: Using configuration type 1 for base access
May 27 17:48:18.892356 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 17:48:18.892363 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:48:18.892371 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:48:18.892379 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:48:18.892386 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:48:18.892394 kernel: ACPI: Added _OSI(Module Device)
May 27 17:48:18.892401 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:48:18.892409 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:48:18.892419 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:48:18.892426 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:48:18.892434 kernel: ACPI: Interpreter enabled
May 27 17:48:18.892441 kernel: ACPI: PM: (supports S0 S5)
May 27 17:48:18.892449 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 17:48:18.892457 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 17:48:18.892464 kernel: PCI: Using E820 reservations for host bridge windows
May 27 17:48:18.892472 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 17:48:18.892480 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 17:48:18.892666 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:48:18.892812 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 17:48:18.892993 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 17:48:18.893006 kernel: PCI host bridge to bus 0000:00
May 27 17:48:18.893156 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 17:48:18.893347 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 17:48:18.893505 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 17:48:18.893613 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 27 17:48:18.893736 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 27 17:48:18.893853 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 27 17:48:18.894001 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 17:48:18.894172 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 17:48:18.894360 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 17:48:18.894483 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 27 17:48:18.894625 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 27 17:48:18.894746 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 27 17:48:18.894877 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 17:48:18.895017 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 17:48:18.895160 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 27 17:48:18.895289 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 27 17:48:18.895405 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 27 17:48:18.895530 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 17:48:18.895678 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 27 17:48:18.895802 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 27 17:48:18.895944 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 27 17:48:18.896070 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 17:48:18.896191 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 27 17:48:18.896307 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 27 17:48:18.896420 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 27 17:48:18.896535 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 27 17:48:18.896661 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 17:48:18.896802 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 17:48:18.896970 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 17:48:18.897094 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 27 17:48:18.897208 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 27 17:48:18.897348 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 17:48:18.897500 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 27 17:48:18.897513 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 17:48:18.897521 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 17:48:18.897529 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 17:48:18.897541 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 17:48:18.897548 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 17:48:18.897556 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 17:48:18.897564 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 17:48:18.897572 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 17:48:18.897580 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 17:48:18.897587 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 17:48:18.897595 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 17:48:18.897603 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 17:48:18.897616 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 17:48:18.897627 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 17:48:18.897637 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 17:48:18.897645 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 17:48:18.897653 kernel: iommu: Default domain type: Translated
May 27 17:48:18.897660 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 17:48:18.897668 kernel: efivars: Registered efivars operations
May 27 17:48:18.897676 kernel: PCI: Using ACPI for IRQ routing
May 27 17:48:18.897683 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 17:48:18.897693 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
May 27 17:48:18.897701 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
May 27 17:48:18.897708 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
May 27 17:48:18.897718 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
May 27 17:48:18.897728 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
May 27 17:48:18.897903 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 17:48:18.898026 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 17:48:18.898150 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 17:48:18.898160 kernel: vgaarb: loaded
May 27 17:48:18.898172 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 17:48:18.898180 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 17:48:18.898188 kernel: clocksource: Switched to clocksource kvm-clock
May 27 17:48:18.898196 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:48:18.898204 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:48:18.898211 kernel: pnp: PnP ACPI init
May 27 17:48:18.898364 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 27 17:48:18.898376 kernel: pnp: PnP ACPI: found 6 devices
May 27 17:48:18.898387 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 17:48:18.898395 kernel: NET: Registered PF_INET protocol family
May 27 17:48:18.898402 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:48:18.898410 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:48:18.898418 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:48:18.898426 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:48:18.898434 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:48:18.898446 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:48:18.898454 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:48:18.898464 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:48:18.898472 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:48:18.898480 kernel: NET: Registered PF_XDP protocol family
May 27 17:48:18.898606 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 27 17:48:18.898734 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 27 17:48:18.898844 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 17:48:18.899019 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 17:48:18.899165 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 17:48:18.899299 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 27 17:48:18.899428 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 27 17:48:18.899550 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 27 17:48:18.899561 kernel: PCI: CLS 0 bytes, default 64
May 27 17:48:18.899569 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 17:48:18.899577 kernel: Initialise system trusted keyrings
May 27 17:48:18.899585 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:48:18.899593 kernel: Key type asymmetric registered
May 27 17:48:18.899604 kernel: Asymmetric key parser 'x509' registered
May 27 17:48:18.899627 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 17:48:18.899637 kernel: io scheduler mq-deadline registered
May 27 17:48:18.899645 kernel: io scheduler kyber registered
May 27 17:48:18.899656 kernel: io scheduler bfq registered
May 27 17:48:18.899664 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 17:48:18.899672 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 17:48:18.899681 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 17:48:18.899689 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 17:48:18.899697 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:48:18.899707 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:48:18.899716 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 17:48:18.899724 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 17:48:18.899732 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 17:48:18.899853 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 17:48:18.899894 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 17:48:18.900049 kernel: rtc_cmos 00:04: registered as rtc0
May 27 17:48:18.900193 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T17:48:18 UTC (1748368098)
May 27 17:48:18.900337 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 27 17:48:18.900353 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 17:48:18.900363 kernel: efifb: probing for efifb
May 27 17:48:18.900374 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 27 17:48:18.900384 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 27 17:48:18.900394 kernel: efifb: scrolling: redraw
May 27 17:48:18.900404 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 17:48:18.900415 kernel: Console: switching to colour frame buffer device 160x50
May 27 17:48:18.900430 kernel: fb0: EFI VGA frame buffer device
May 27 17:48:18.900441 kernel: pstore: Using crash dump compression: deflate
May 27 17:48:18.900455 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 17:48:18.900465 kernel: NET: Registered PF_INET6 protocol family
May 27 17:48:18.900473 kernel: Segment Routing with IPv6
May 27 17:48:18.900484 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:48:18.900497 kernel: NET: Registered PF_PACKET protocol family
May 27 17:48:18.900507 kernel: Key type dns_resolver registered
May 27 17:48:18.900518 kernel: IPI shorthand broadcast: enabled
May 27 17:48:18.900529 kernel: sched_clock: Marking stable (3271003527, 183834503)->(3476175530, -21337500)
May 27 17:48:18.900539 kernel: registered taskstats version 1
May 27 17:48:18.900550 kernel: Loading compiled-in X.509 certificates
May 27 17:48:18.900559 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 17:48:18.900568 kernel: Demotion targets for Node 0: null
May 27 17:48:18.900576 kernel: Key type .fscrypt registered
May 27 17:48:18.900587 kernel: Key type fscrypt-provisioning registered
May 27 17:48:18.900595 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:48:18.900603 kernel: ima: Allocated hash algorithm: sha1
May 27 17:48:18.900611 kernel: ima: No architecture policies found
May 27 17:48:18.900619 kernel: clk: Disabling unused clocks
May 27 17:48:18.900627 kernel: Warning: unable to open an initial console.
May 27 17:48:18.900635 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 17:48:18.900644 kernel: Write protecting the kernel read-only data: 24576k May 27 17:48:18.900652 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 17:48:18.900662 kernel: Run /init as init process May 27 17:48:18.900670 kernel: with arguments: May 27 17:48:18.900678 kernel: /init May 27 17:48:18.900686 kernel: with environment: May 27 17:48:18.900694 kernel: HOME=/ May 27 17:48:18.900702 kernel: TERM=linux May 27 17:48:18.900710 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 17:48:18.900720 systemd[1]: Successfully made /usr/ read-only. May 27 17:48:18.900734 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:48:18.900743 systemd[1]: Detected virtualization kvm. May 27 17:48:18.900751 systemd[1]: Detected architecture x86-64. May 27 17:48:18.900760 systemd[1]: Running in initrd. May 27 17:48:18.900768 systemd[1]: No hostname configured, using default hostname. May 27 17:48:18.900777 systemd[1]: Hostname set to . May 27 17:48:18.900786 systemd[1]: Initializing machine ID from VM UUID. May 27 17:48:18.900796 systemd[1]: Queued start job for default target initrd.target. May 27 17:48:18.900805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:48:18.900814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:48:18.900823 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 27 17:48:18.900832 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:48:18.900841 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 17:48:18.900851 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 17:48:18.900912 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 17:48:18.900922 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 17:48:18.900931 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:48:18.900940 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:48:18.900948 systemd[1]: Reached target paths.target - Path Units. May 27 17:48:18.900957 systemd[1]: Reached target slices.target - Slice Units. May 27 17:48:18.900965 systemd[1]: Reached target swap.target - Swaps. May 27 17:48:18.900974 systemd[1]: Reached target timers.target - Timer Units. May 27 17:48:18.900983 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:48:18.900994 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:48:18.901002 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 17:48:18.901011 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 17:48:18.901020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:48:18.901029 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:48:18.901038 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:48:18.901046 systemd[1]: Reached target sockets.target - Socket Units. 
May 27 17:48:18.901055 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 17:48:18.901066 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:48:18.901074 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 17:48:18.901084 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 17:48:18.901092 systemd[1]: Starting systemd-fsck-usr.service... May 27 17:48:18.901101 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:48:18.901110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:48:18.901119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:48:18.901127 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 17:48:18.901139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:48:18.901147 systemd[1]: Finished systemd-fsck-usr.service. May 27 17:48:18.901156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:48:18.901199 systemd-journald[219]: Collecting audit messages is disabled. May 27 17:48:18.901229 systemd-journald[219]: Journal started May 27 17:48:18.901251 systemd-journald[219]: Runtime Journal (/run/log/journal/cb29f0e7717d4f42bcdcc7fca767c610) is 6M, max 48.2M, 42.2M free. May 27 17:48:18.891993 systemd-modules-load[223]: Inserted module 'overlay' May 27 17:48:18.904916 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:48:18.909024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:48:18.912465 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 27 17:48:18.922909 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 17:48:18.924898 kernel: Bridge firewalling registered May 27 17:48:18.924877 systemd-modules-load[223]: Inserted module 'br_netfilter' May 27 17:48:18.926960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:48:18.928859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:48:18.930502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:48:18.937040 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 17:48:18.945665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 17:48:18.950003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:48:18.951303 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:48:18.952436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:48:18.965860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:48:18.967999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 17:48:18.978438 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:48:18.981196 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 27 17:48:19.013669 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:48:19.017298 systemd-resolved[258]: Positive Trust Anchors: May 27 17:48:19.017306 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:48:19.017342 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:48:19.022333 systemd-resolved[258]: Defaulting to hostname 'linux'. May 27 17:48:19.030207 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:48:19.030717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:48:19.143921 kernel: SCSI subsystem initialized May 27 17:48:19.155898 kernel: Loading iSCSI transport class v2.0-870. May 27 17:48:19.173490 kernel: iscsi: registered transport (tcp) May 27 17:48:19.201919 kernel: iscsi: registered transport (qla4xxx) May 27 17:48:19.201992 kernel: QLogic iSCSI HBA Driver May 27 17:48:19.228139 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 27 17:48:19.263552 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:48:19.265741 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:48:19.340014 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 17:48:19.343124 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 17:48:19.435932 kernel: raid6: avx2x4 gen() 29039 MB/s May 27 17:48:19.458930 kernel: raid6: avx2x2 gen() 29055 MB/s May 27 17:48:19.476139 kernel: raid6: avx2x1 gen() 21339 MB/s May 27 17:48:19.476214 kernel: raid6: using algorithm avx2x2 gen() 29055 MB/s May 27 17:48:19.494085 kernel: raid6: .... xor() 19502 MB/s, rmw enabled May 27 17:48:19.494168 kernel: raid6: using avx2x2 recovery algorithm May 27 17:48:19.515937 kernel: xor: automatically using best checksumming function avx May 27 17:48:19.715910 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 17:48:19.724446 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 17:48:19.728308 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:48:19.758518 systemd-udevd[473]: Using default interface naming scheme 'v255'. May 27 17:48:19.763822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:48:19.765325 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 17:48:19.792905 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation May 27 17:48:19.821764 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:48:19.824395 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:48:19.895579 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:48:19.899646 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 27 17:48:19.933913 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 27 17:48:19.941075 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 17:48:19.948143 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 17:48:19.948258 kernel: GPT:9289727 != 19775487 May 27 17:48:19.948273 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 17:48:19.948283 kernel: GPT:9289727 != 19775487 May 27 17:48:19.948293 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 17:48:19.948309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:48:19.966910 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 27 17:48:19.966968 kernel: libata version 3.00 loaded. May 27 17:48:19.972004 kernel: cryptd: max_cpu_qlen set to 1000 May 27 17:48:19.975875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:48:19.976059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:48:19.978715 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:48:19.985183 kernel: AES CTR mode by8 optimization enabled May 27 17:48:19.980325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:48:19.983406 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:48:19.995366 kernel: ahci 0000:00:1f.2: version 3.0 May 27 17:48:19.995609 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 17:48:19.995561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:48:19.999943 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 17:48:20.000356 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 17:48:19.995678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 17:48:20.007336 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 17:48:20.000255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:48:20.016883 kernel: scsi host0: ahci May 27 17:48:20.019882 kernel: scsi host1: ahci May 27 17:48:20.021884 kernel: scsi host2: ahci May 27 17:48:20.024887 kernel: scsi host3: ahci May 27 17:48:20.027883 kernel: scsi host4: ahci May 27 17:48:20.029148 kernel: scsi host5: ahci May 27 17:48:20.030572 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 27 17:48:20.030598 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 27 17:48:20.032103 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 27 17:48:20.032605 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 27 17:48:20.033896 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 27 17:48:20.033950 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 27 17:48:20.038891 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 17:48:20.042394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:48:20.060228 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 17:48:20.067604 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 17:48:20.068139 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 17:48:20.077016 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 17:48:20.079636 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 27 17:48:20.111443 disk-uuid[640]: Primary Header is updated. May 27 17:48:20.111443 disk-uuid[640]: Secondary Entries is updated. May 27 17:48:20.111443 disk-uuid[640]: Secondary Header is updated. May 27 17:48:20.115894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:48:20.119883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:48:20.347431 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 17:48:20.347509 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 17:48:20.347520 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 17:48:20.347530 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 17:48:20.348892 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 17:48:20.351159 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 17:48:20.351243 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 17:48:20.351254 kernel: ata3.00: applying bridge limits May 27 17:48:20.352252 kernel: ata3.00: configured for UDMA/100 May 27 17:48:20.352895 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 17:48:20.408921 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 17:48:20.409280 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 17:48:20.421980 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 27 17:48:20.811206 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 17:48:20.812357 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:48:20.814100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:48:20.814401 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:48:20.820301 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 17:48:20.851742 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
May 27 17:48:21.161914 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:48:21.162655 disk-uuid[641]: The operation has completed successfully. May 27 17:48:21.198458 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 17:48:21.198599 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 17:48:21.231290 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 17:48:21.254168 sh[669]: Success May 27 17:48:21.275652 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 17:48:21.275708 kernel: device-mapper: uevent: version 1.0.3 May 27 17:48:21.275720 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 17:48:21.283905 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 17:48:21.316765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 17:48:21.320288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 17:48:21.340194 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 17:48:21.348400 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 17:48:21.348432 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (681) May 27 17:48:21.348898 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd May 27 17:48:21.351434 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 17:48:21.351459 kernel: BTRFS info (device dm-0): using free-space-tree May 27 17:48:21.355684 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 17:48:21.356656 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
May 27 17:48:21.357811 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 17:48:21.359666 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 17:48:21.363003 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 17:48:21.403632 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (717) May 27 17:48:21.403678 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:48:21.403694 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:48:21.404567 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:48:21.411897 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:48:21.413082 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 17:48:21.415598 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 17:48:21.503342 ignition[759]: Ignition 2.21.0 May 27 17:48:21.503356 ignition[759]: Stage: fetch-offline May 27 17:48:21.503391 ignition[759]: no configs at "/usr/lib/ignition/base.d" May 27 17:48:21.504869 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:48:21.503400 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:48:21.507891 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 27 17:48:21.503489 ignition[759]: parsed url from cmdline: "" May 27 17:48:21.503492 ignition[759]: no config URL provided May 27 17:48:21.503498 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" May 27 17:48:21.503507 ignition[759]: no config at "/usr/lib/ignition/user.ign" May 27 17:48:21.503530 ignition[759]: op(1): [started] loading QEMU firmware config module May 27 17:48:21.503535 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 17:48:21.512772 ignition[759]: op(1): [finished] loading QEMU firmware config module May 27 17:48:21.550058 systemd-networkd[857]: lo: Link UP May 27 17:48:21.550072 systemd-networkd[857]: lo: Gained carrier May 27 17:48:21.551636 systemd-networkd[857]: Enumeration completed May 27 17:48:21.551850 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:48:21.552064 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:48:21.552069 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:48:21.553089 systemd-networkd[857]: eth0: Link UP May 27 17:48:21.553095 systemd-networkd[857]: eth0: Gained carrier May 27 17:48:21.553109 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:48:21.554883 systemd[1]: Reached target network.target - Network. 
May 27 17:48:21.570410 ignition[759]: parsing config with SHA512: 0ead575aa19c24f2fefda9f46a5a778155258f87ece49c8c4101fb072fecaff82a0f7cf3cdca05eac8011d90d68fe26fe680923a75dbe26d5e4009838a284d92 May 27 17:48:21.574950 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 17:48:21.576309 unknown[759]: fetched base config from "system" May 27 17:48:21.576318 unknown[759]: fetched user config from "qemu" May 27 17:48:21.576706 ignition[759]: fetch-offline: fetch-offline passed May 27 17:48:21.576762 ignition[759]: Ignition finished successfully May 27 17:48:21.581562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:48:21.588123 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 17:48:21.589042 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 17:48:21.619692 ignition[863]: Ignition 2.21.0 May 27 17:48:21.619705 ignition[863]: Stage: kargs May 27 17:48:21.619895 ignition[863]: no configs at "/usr/lib/ignition/base.d" May 27 17:48:21.619907 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:48:21.620535 ignition[863]: kargs: kargs passed May 27 17:48:21.620579 ignition[863]: Ignition finished successfully May 27 17:48:21.624963 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 17:48:21.626480 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 27 17:48:21.659237 ignition[871]: Ignition 2.21.0 May 27 17:48:21.659253 ignition[871]: Stage: disks May 27 17:48:21.659388 ignition[871]: no configs at "/usr/lib/ignition/base.d" May 27 17:48:21.659399 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:48:21.661841 ignition[871]: disks: disks passed May 27 17:48:21.661950 ignition[871]: Ignition finished successfully May 27 17:48:21.665314 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 17:48:21.665842 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 17:48:21.668405 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 17:48:21.668715 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:48:21.669068 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:48:21.674625 systemd[1]: Reached target basic.target - Basic System. May 27 17:48:21.676086 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 17:48:21.703074 systemd-resolved[258]: Detected conflict on linux IN A 10.0.0.132 May 27 17:48:21.703088 systemd-resolved[258]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. May 27 17:48:21.705117 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 17:48:21.872675 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 17:48:21.875288 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 17:48:21.984924 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none. May 27 17:48:21.985973 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 17:48:21.986999 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 17:48:21.990633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 27 17:48:21.992637 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 17:48:21.993982 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 17:48:21.994032 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 17:48:21.994063 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:48:22.009554 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 17:48:22.012231 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 17:48:22.018802 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (889) May 27 17:48:22.018848 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:48:22.018973 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:48:22.020907 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:48:22.025534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 17:48:22.053195 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory May 27 17:48:22.058852 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory May 27 17:48:22.063493 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory May 27 17:48:22.069172 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory May 27 17:48:22.167123 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 17:48:22.168480 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 17:48:22.171473 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 27 17:48:22.191892 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:48:22.203400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 17:48:22.218880 ignition[1003]: INFO : Ignition 2.21.0 May 27 17:48:22.220248 ignition[1003]: INFO : Stage: mount May 27 17:48:22.221874 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:48:22.221874 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:48:22.225157 ignition[1003]: INFO : mount: mount passed May 27 17:48:22.226095 ignition[1003]: INFO : Ignition finished successfully May 27 17:48:22.230310 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 17:48:22.232966 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 17:48:22.347732 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 17:48:22.349521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:48:22.385884 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1015) May 27 17:48:22.385937 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:48:22.385949 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:48:22.387893 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:48:22.391231 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 17:48:22.419689 ignition[1032]: INFO : Ignition 2.21.0 May 27 17:48:22.419689 ignition[1032]: INFO : Stage: files May 27 17:48:22.422049 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:48:22.422049 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:48:22.422049 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping May 27 17:48:22.422049 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 17:48:22.422049 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 17:48:22.429853 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 17:48:22.429853 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 17:48:22.429853 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 17:48:22.429853 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 17:48:22.429853 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 27 17:48:22.425323 unknown[1032]: wrote ssh authorized keys file for user: core May 27 17:48:22.546769 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 17:48:23.008332 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 17:48:23.008332 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 17:48:23.015716 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 27 17:48:23.331123 systemd-networkd[857]: eth0: Gained IPv6LL May 27 17:48:23.536502 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 17:48:23.642806 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 17:48:23.642806 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 17:48:23.646844 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 17:48:23.646844 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 17:48:23.646844 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 17:48:23.646844 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:48:23.646844 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:48:23.646844 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:48:23.646844 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:48:23.659944 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:48:23.659944 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:48:23.659944 
ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:48:23.670549 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:48:23.670549 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:48:23.676449 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 27 17:48:24.191116 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 17:48:24.526572 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:48:24.526572 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 17:48:24.530467 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:48:24.568587 ignition[1032]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:48:24.568587 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 27 17:48:24.568587 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 27 17:48:24.574481 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 17:48:24.574481 ignition[1032]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 17:48:24.574481 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 27 17:48:24.574481 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 27 17:48:24.597069 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 17:48:24.646145 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 17:48:24.648033 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 27 17:48:24.648033 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 27 17:48:24.651116 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 27 17:48:24.652729 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 17:48:24.654744 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 17:48:24.656514 ignition[1032]: INFO : files: files passed May 27 17:48:24.657341 ignition[1032]: INFO : Ignition finished successfully May 27 17:48:24.659851 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 17:48:24.661786 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 17:48:24.663958 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 17:48:24.695604 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 17:48:24.695787 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 27 17:48:24.701603 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory May 27 17:48:24.706542 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:48:24.706542 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 17:48:24.710383 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:48:24.714387 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:48:24.715258 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 17:48:24.720171 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 17:48:24.762274 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 17:48:24.762424 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 17:48:24.763500 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 17:48:24.766263 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 17:48:24.766653 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 17:48:24.769921 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 17:48:24.788724 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:48:24.790807 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 17:48:24.821351 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 17:48:24.822758 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:48:24.824693 systemd[1]: Stopped target timers.target - Timer Units. 
May 27 17:48:24.825258 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 17:48:24.825396 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:48:24.831038 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 17:48:24.831777 systemd[1]: Stopped target basic.target - Basic System. May 27 17:48:24.832512 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 17:48:24.836625 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:48:24.837217 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 17:48:24.837654 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 17:48:24.838311 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 17:48:24.845938 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:48:24.848420 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 17:48:24.848776 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 17:48:24.849166 systemd[1]: Stopped target swap.target - Swaps. May 27 17:48:24.849557 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 17:48:24.849690 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 17:48:24.858280 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 17:48:24.858933 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:48:24.861847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 17:48:24.861992 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:48:24.866314 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 17:48:24.866480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 27 17:48:24.867471 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 17:48:24.867611 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:48:24.870938 systemd[1]: Stopped target paths.target - Path Units. May 27 17:48:24.874317 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 17:48:24.878977 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:48:24.881920 systemd[1]: Stopped target slices.target - Slice Units. May 27 17:48:24.883780 systemd[1]: Stopped target sockets.target - Socket Units. May 27 17:48:24.884310 systemd[1]: iscsid.socket: Deactivated successfully. May 27 17:48:24.884420 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:48:24.885940 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 17:48:24.886028 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:48:24.888356 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 17:48:24.888491 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:48:24.890302 systemd[1]: ignition-files.service: Deactivated successfully. May 27 17:48:24.890410 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 17:48:24.896954 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 17:48:24.897381 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 17:48:24.897535 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:48:24.901290 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 17:48:24.902584 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 17:48:24.902758 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 27 17:48:24.905789 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 17:48:24.905949 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:48:24.913813 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 17:48:24.916001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 17:48:24.942407 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 17:48:24.949855 ignition[1087]: INFO : Ignition 2.21.0 May 27 17:48:24.949855 ignition[1087]: INFO : Stage: umount May 27 17:48:24.952728 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:48:24.952728 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:48:24.952728 ignition[1087]: INFO : umount: umount passed May 27 17:48:24.952728 ignition[1087]: INFO : Ignition finished successfully May 27 17:48:24.956500 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 17:48:24.956673 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 17:48:24.959656 systemd[1]: Stopped target network.target - Network. May 27 17:48:24.961642 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 17:48:24.961730 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 17:48:24.962430 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 17:48:24.962480 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 17:48:24.962734 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 17:48:24.962794 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 17:48:24.963280 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 17:48:24.963323 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 17:48:24.963844 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 27 17:48:24.971446 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 17:48:24.984082 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 17:48:24.984261 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 17:48:24.989593 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 17:48:24.990141 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 17:48:24.990212 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:48:24.996955 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 17:48:24.997277 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 17:48:24.997417 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 17:48:25.002139 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 17:48:25.003705 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 17:48:25.005198 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 17:48:25.005246 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 17:48:25.009036 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 17:48:25.009630 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 17:48:25.009703 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:48:25.010326 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:48:25.010383 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:48:25.016358 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 17:48:25.016447 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 27 17:48:25.017207 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:48:25.020416 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 17:48:25.041258 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 17:48:25.052329 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:48:25.055923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 17:48:25.056014 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 17:48:25.056790 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 17:48:25.056837 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:48:25.057364 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 17:48:25.057431 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 17:48:25.058353 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 17:48:25.058410 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 17:48:25.065968 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 17:48:25.066085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:48:25.068005 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 17:48:25.071746 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 17:48:25.071831 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:48:25.079080 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 17:48:25.079189 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:48:25.082888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 27 17:48:25.082980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:48:25.087525 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 17:48:25.087650 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 17:48:25.088541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 17:48:25.088640 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 17:48:25.133487 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 17:48:25.133641 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 17:48:25.134686 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 17:48:25.137329 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 17:48:25.137402 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 17:48:25.138799 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 17:48:25.167658 systemd[1]: Switching root. May 27 17:48:25.198876 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). 
May 27 17:48:25.198954 systemd-journald[219]: Journal stopped May 27 17:48:26.688759 kernel: SELinux: policy capability network_peer_controls=1 May 27 17:48:26.688820 kernel: SELinux: policy capability open_perms=1 May 27 17:48:26.688837 kernel: SELinux: policy capability extended_socket_class=1 May 27 17:48:26.688849 kernel: SELinux: policy capability always_check_network=0 May 27 17:48:26.688874 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 17:48:26.688901 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 17:48:26.688913 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 17:48:26.688929 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 17:48:26.688945 kernel: SELinux: policy capability userspace_initial_context=0 May 27 17:48:26.688956 kernel: audit: type=1403 audit(1748368105.847:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 17:48:26.688970 systemd[1]: Successfully loaded SELinux policy in 48.958ms. May 27 17:48:26.688992 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.141ms. May 27 17:48:26.689005 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:48:26.689018 systemd[1]: Detected virtualization kvm. May 27 17:48:26.689036 systemd[1]: Detected architecture x86-64. May 27 17:48:26.689049 systemd[1]: Detected first boot. May 27 17:48:26.689061 systemd[1]: Initializing machine ID from VM UUID. May 27 17:48:26.689073 zram_generator::config[1133]: No configuration found. 
May 27 17:48:26.689090 kernel: Guest personality initialized and is inactive May 27 17:48:26.689102 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 17:48:26.689113 kernel: Initialized host personality May 27 17:48:26.689124 kernel: NET: Registered PF_VSOCK protocol family May 27 17:48:26.689136 systemd[1]: Populated /etc with preset unit settings. May 27 17:48:26.689154 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 17:48:26.689166 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 17:48:26.689179 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 17:48:26.689191 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 17:48:26.689203 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 17:48:26.689216 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 17:48:26.689227 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 17:48:26.689240 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 17:48:26.689261 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 17:48:26.689274 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 17:48:26.689287 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 17:48:26.689299 systemd[1]: Created slice user.slice - User and Session Slice. May 27 17:48:26.689311 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:48:26.689323 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:48:26.689336 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 27 17:48:26.689348 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 17:48:26.689360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 17:48:26.689378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:48:26.689390 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 17:48:26.689408 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:48:26.689422 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:48:26.689433 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 17:48:26.689445 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 17:48:26.689457 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 17:48:26.689475 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 17:48:26.689487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:48:26.689499 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:48:26.689511 systemd[1]: Reached target slices.target - Slice Units. May 27 17:48:26.689523 systemd[1]: Reached target swap.target - Swaps. May 27 17:48:26.689535 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 17:48:26.689547 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 17:48:26.689559 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 17:48:26.689571 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:48:26.689583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:48:26.689600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 27 17:48:26.689612 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 17:48:26.689625 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 17:48:26.689637 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 17:48:26.689649 systemd[1]: Mounting media.mount - External Media Directory... May 27 17:48:26.689661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:48:26.689673 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 17:48:26.689686 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 17:48:26.689713 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 17:48:26.689726 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 17:48:26.689738 systemd[1]: Reached target machines.target - Containers. May 27 17:48:26.689750 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 17:48:26.689762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:48:26.689775 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:48:26.689787 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 17:48:26.689802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:48:26.689824 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:48:26.689843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:48:26.689860 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 27 17:48:26.689896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:48:26.689913 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 17:48:26.689928 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 17:48:26.689942 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 17:48:26.689957 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 17:48:26.689971 systemd[1]: Stopped systemd-fsck-usr.service. May 27 17:48:26.689989 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:48:26.690004 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:48:26.690019 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:48:26.690033 kernel: fuse: init (API version 7.41) May 27 17:48:26.690048 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:48:26.690063 kernel: loop: module loaded May 27 17:48:26.690079 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 17:48:26.690096 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 17:48:26.690115 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:48:26.690135 systemd[1]: verity-setup.service: Deactivated successfully. May 27 17:48:26.690151 systemd[1]: Stopped verity-setup.service. May 27 17:48:26.690168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 27 17:48:26.690184 kernel: ACPI: bus type drm_connector registered May 27 17:48:26.690199 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 17:48:26.690219 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 17:48:26.690236 systemd[1]: Mounted media.mount - External Media Directory. May 27 17:48:26.690255 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 17:48:26.690271 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 17:48:26.690287 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 17:48:26.690331 systemd-journald[1211]: Collecting audit messages is disabled. May 27 17:48:26.690367 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 17:48:26.690383 systemd-journald[1211]: Journal started May 27 17:48:26.690411 systemd-journald[1211]: Runtime Journal (/run/log/journal/cb29f0e7717d4f42bcdcc7fca767c610) is 6M, max 48.2M, 42.2M free. May 27 17:48:26.418934 systemd[1]: Queued start job for default target multi-user.target. May 27 17:48:26.441032 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 17:48:26.441539 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 17:48:26.692950 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:48:26.694730 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:48:26.696381 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 17:48:26.696648 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 17:48:26.698383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:48:26.698651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:48:26.700273 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 27 17:48:26.700534 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:48:26.702029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:48:26.702286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:48:26.703925 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 17:48:26.704187 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 17:48:26.705672 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:48:26.705960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:48:26.707493 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:48:26.709269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:48:26.710985 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 17:48:26.712653 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 17:48:26.728307 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:48:26.731333 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 17:48:26.733968 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 17:48:26.735301 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 17:48:26.735344 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:48:26.737064 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 17:48:26.744502 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
May 27 17:48:26.747162 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:48:26.749927 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:48:26.753837 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:48:26.755266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:48:26.765094 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:48:26.767539 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:48:26.768911 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:48:26.775847 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:48:26.778764 systemd-journald[1211]: Time spent on flushing to /var/log/journal/cb29f0e7717d4f42bcdcc7fca767c610 is 23.380ms for 1040 entries.
May 27 17:48:26.778764 systemd-journald[1211]: System Journal (/var/log/journal/cb29f0e7717d4f42bcdcc7fca767c610) is 8M, max 195.6M, 187.6M free.
May 27 17:48:26.854017 systemd-journald[1211]: Received client request to flush runtime journal.
May 27 17:48:26.780390 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:48:26.783744 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:48:26.805340 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:48:26.839175 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:48:26.841253 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:48:26.844799 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:48:26.849072 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:48:26.852377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:48:26.859953 kernel: loop0: detected capacity change from 0 to 146240
May 27 17:48:26.863481 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:48:26.945953 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:48:26.946787 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:48:26.955146 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:48:26.959217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:48:26.964996 kernel: loop1: detected capacity change from 0 to 224512
May 27 17:48:26.993072 kernel: loop2: detected capacity change from 0 to 113872
May 27 17:48:27.012836 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 27 17:48:27.012877 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 27 17:48:27.025445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:48:27.036895 kernel: loop3: detected capacity change from 0 to 146240
May 27 17:48:27.074025 kernel: loop4: detected capacity change from 0 to 224512
May 27 17:48:27.084897 kernel: loop5: detected capacity change from 0 to 113872
May 27 17:48:27.100485 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 17:48:27.101281 (sd-merge)[1276]: Merged extensions into '/usr'.
May 27 17:48:27.106484 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:48:27.106650 systemd[1]: Reloading...
May 27 17:48:27.190907 zram_generator::config[1302]: No configuration found.
May 27 17:48:27.342842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:48:27.395484 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:48:27.452012 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:48:27.452217 systemd[1]: Reloading finished in 345 ms.
May 27 17:48:27.484210 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:48:27.486173 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:48:27.509889 systemd[1]: Starting ensure-sysext.service...
May 27 17:48:27.512736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:48:27.536187 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)...
May 27 17:48:27.536207 systemd[1]: Reloading...
May 27 17:48:27.579353 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:48:27.579408 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:48:27.579794 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:48:27.582105 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:48:27.583099 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:48:27.583370 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
May 27 17:48:27.583436 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
May 27 17:48:27.589312 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:48:27.591081 systemd-tmpfiles[1340]: Skipping /boot
May 27 17:48:27.630901 zram_generator::config[1379]: No configuration found.
May 27 17:48:27.633806 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:48:27.633821 systemd-tmpfiles[1340]: Skipping /boot
May 27 17:48:27.717098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:48:27.811842 systemd[1]: Reloading finished in 275 ms.
May 27 17:48:27.835556 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:48:27.852968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:48:27.862548 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:48:27.865138 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:48:27.867787 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:48:27.881966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:48:27.886395 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:48:27.892021 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:48:27.898823 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:48:27.899084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:48:27.901514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:48:27.904513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:48:27.908345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:48:27.910102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:48:27.910259 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:48:27.921376 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:48:27.922626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:48:27.925216 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:48:27.927915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:48:27.930754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:48:27.936929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:48:27.937219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:48:27.940188 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:48:27.940823 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:48:27.945228 systemd-udevd[1410]: Using default interface naming scheme 'v255'.
May 27 17:48:27.951995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:48:27.952283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:48:27.953957 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:48:27.958102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:48:27.959660 augenrules[1439]: No rules
May 27 17:48:27.966327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:48:27.967795 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:48:27.968025 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:48:27.971025 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:48:27.972375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:48:27.974012 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:48:27.974301 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:48:27.977696 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:48:27.980533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:48:27.980777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:48:27.982599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:48:27.982831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:48:27.985187 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:48:27.985538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:48:27.987478 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:48:27.994130 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:48:27.998128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:48:28.000908 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:48:28.021470 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:48:28.024793 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:48:28.026338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:48:28.029297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:48:28.034312 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:48:28.042109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:48:28.052264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:48:28.053841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:48:28.054022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:48:28.059308 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:48:28.060985 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:48:28.061142 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:48:28.063968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:48:28.074616 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:48:28.077334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:48:28.077632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:48:28.079737 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:48:28.080677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:48:28.085272 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:48:28.086075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:48:28.093687 systemd[1]: Finished ensure-sysext.service.
May 27 17:48:28.103058 augenrules[1475]: /sbin/augenrules: No change
May 27 17:48:28.109326 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:48:28.109418 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:48:28.115102 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 17:48:28.126791 augenrules[1515]: No rules
May 27 17:48:28.128977 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:48:28.132237 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:48:28.134796 systemd-resolved[1409]: Positive Trust Anchors:
May 27 17:48:28.134818 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:48:28.134879 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:48:28.142482 systemd-resolved[1409]: Defaulting to hostname 'linux'.
May 27 17:48:28.145074 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:48:28.147308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:48:28.215253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:48:28.216893 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 17:48:28.218360 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:48:28.252255 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:48:28.260101 systemd-networkd[1492]: lo: Link UP
May 27 17:48:28.260116 systemd-networkd[1492]: lo: Gained carrier
May 27 17:48:28.262234 systemd-networkd[1492]: Enumeration completed
May 27 17:48:28.262622 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:48:28.263549 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:48:28.263560 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:48:28.264068 systemd[1]: Reached target network.target - Network.
May 27 17:48:28.265008 systemd-networkd[1492]: eth0: Link UP
May 27 17:48:28.265199 systemd-networkd[1492]: eth0: Gained carrier
May 27 17:48:28.265222 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:48:28.266758 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:48:28.270929 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 27 17:48:28.271019 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:48:28.272451 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:48:28.276937 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 17:48:28.282902 kernel: ACPI: button: Power Button [PWRF]
May 27 17:48:28.291333 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 17:48:28.292922 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:48:29.175659 systemd-resolved[1409]: Clock change detected. Flushing caches.
May 27 17:48:29.175749 systemd-timesyncd[1514]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 17:48:29.175806 systemd-timesyncd[1514]: Initial clock synchronization to Tue 2025-05-27 17:48:29.175604 UTC.
May 27 17:48:29.176631 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:48:29.177927 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:48:29.180427 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 17:48:29.181616 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:48:29.182897 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:48:29.182925 systemd[1]: Reached target paths.target - Path Units.
May 27 17:48:29.183861 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:48:29.185101 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:48:29.186317 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:48:29.187663 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:48:29.189595 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:48:29.192956 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:48:29.201020 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:48:29.202784 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:48:29.204125 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:48:29.214216 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 27 17:48:29.214549 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 27 17:48:29.214721 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 17:48:29.220248 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:48:29.221990 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:48:29.225801 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:48:29.227678 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:48:29.234532 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:48:29.236802 systemd[1]: Reached target basic.target - Basic System.
May 27 17:48:29.238519 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:48:29.238542 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:48:29.240292 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:48:29.244594 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:48:29.246550 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:48:29.248949 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:48:29.252211 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:48:29.254468 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:48:29.256862 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 17:48:29.266279 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:48:29.271070 jq[1559]: false
May 27 17:48:29.272567 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:48:29.276630 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:48:29.279814 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:48:29.283900 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache
May 27 17:48:29.283680 oslogin_cache_refresh[1561]: Refreshing passwd entry cache
May 27 17:48:29.298487 extend-filesystems[1560]: Found loop3
May 27 17:48:29.298487 extend-filesystems[1560]: Found loop4
May 27 17:48:29.298487 extend-filesystems[1560]: Found loop5
May 27 17:48:29.298487 extend-filesystems[1560]: Found sr0
May 27 17:48:29.298487 extend-filesystems[1560]: Found vda
May 27 17:48:29.298487 extend-filesystems[1560]: Found vda1
May 27 17:48:29.298487 extend-filesystems[1560]: Found vda2
May 27 17:48:29.298487 extend-filesystems[1560]: Found vda3
May 27 17:48:29.298487 extend-filesystems[1560]: Found usr
May 27 17:48:29.298487 extend-filesystems[1560]: Found vda4
May 27 17:48:29.298487 extend-filesystems[1560]: Found vda6
May 27 17:48:29.298487 extend-filesystems[1560]: Found vda7
May 27 17:48:29.298487 extend-filesystems[1560]: Found vda9
May 27 17:48:29.298487 extend-filesystems[1560]: Checking size of /dev/vda9
May 27 17:48:29.293991 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:48:29.299739 oslogin_cache_refresh[1561]: Failure getting users, quitting
May 27 17:48:29.351479 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting
May 27 17:48:29.351479 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:48:29.351479 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache
May 27 17:48:29.351479 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting
May 27 17:48:29.351479 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:48:29.296083 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:48:29.299760 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:48:29.296736 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:48:29.299817 oslogin_cache_refresh[1561]: Refreshing group entry cache
May 27 17:48:29.300486 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:48:29.317362 oslogin_cache_refresh[1561]: Failure getting groups, quitting
May 27 17:48:29.302819 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:48:29.317394 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:48:29.306154 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:48:29.353509 jq[1572]: true
May 27 17:48:29.308737 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:48:29.308966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:48:29.318951 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 17:48:29.319229 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 17:48:29.327746 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:48:29.328132 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:48:29.332926 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:48:29.333261 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:48:29.376851 jq[1583]: true
May 27 17:48:29.391652 dbus-daemon[1556]: [system] SELinux support is enabled
May 27 17:48:29.397566 update_engine[1569]: I20250527 17:48:29.397465 1569 main.cc:92] Flatcar Update Engine starting
May 27 17:48:29.400842 update_engine[1569]: I20250527 17:48:29.400642 1569 update_check_scheduler.cc:74] Next update check in 3m8s
May 27 17:48:29.401318 extend-filesystems[1560]: Resized partition /dev/vda9
May 27 17:48:29.401634 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:48:29.409395 tar[1577]: linux-amd64/LICENSE
May 27 17:48:29.409395 tar[1577]: linux-amd64/helm
May 27 17:48:29.409191 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:48:29.409320 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:48:29.413848 extend-filesystems[1600]: resize2fs 1.47.2 (1-Jan-2025)
May 27 17:48:29.423410 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 27 17:48:29.425968 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:48:29.427798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:48:29.429130 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:48:29.429278 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:48:29.431878 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:48:29.439661 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:48:29.468983 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:48:29.470842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:48:29.483101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:48:29.502631 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 27 17:48:29.539474 extend-filesystems[1600]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 27 17:48:29.539474 extend-filesystems[1600]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 17:48:29.539474 extend-filesystems[1600]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 27 17:48:29.538345 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:48:29.544713 extend-filesystems[1560]: Resized filesystem in /dev/vda9
May 27 17:48:29.539009 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:48:29.582745 bash[1618]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:48:29.588526 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:48:29.590900 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 17:48:29.600834 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:48:29.642238 locksmithd[1604]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:48:29.664504 kernel: kvm_amd: TSC scaling supported
May 27 17:48:29.664609 kernel: kvm_amd: Nested Virtualization enabled
May 27 17:48:29.664629 kernel: kvm_amd: Nested Paging enabled
May 27 17:48:29.665630 kernel: kvm_amd: LBR virtualization supported
May 27 17:48:29.666784 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 27 17:48:29.666838 kernel: kvm_amd: Virtual GIF supported
May 27 17:48:29.685871 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:48:29.693109 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:48:29.743810 systemd[1]: issuegen.service: Deactivated successfully.
May 27 17:48:29.744776 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 17:48:29.747891 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 17:48:29.772397 kernel: EDAC MC: Ver: 3.0.0
May 27 17:48:29.778654 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:48:29.789972 systemd-logind[1567]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 17:48:29.790000 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:48:29.791208 systemd-logind[1567]: New seat seat0.
May 27 17:48:29.792130 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:48:29.804512 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 17:48:29.808774 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 17:48:29.812794 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 17:48:29.814644 systemd[1]: Reached target getty.target - Login Prompts.
May 27 17:48:29.918172 containerd[1593]: time="2025-05-27T17:48:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:48:29.920171 containerd[1593]: time="2025-05-27T17:48:29.920129994Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:48:29.935035 containerd[1593]: time="2025-05-27T17:48:29.934410274Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="38.612µs"
May 27 17:48:29.935035 containerd[1593]: time="2025-05-27T17:48:29.934456401Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:48:29.935035 containerd[1593]: time="2025-05-27T17:48:29.934480065Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:48:29.935035 containerd[1593]: time="2025-05-27T17:48:29.934712401Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:48:29.935035 containerd[1593]: time="2025-05-27T17:48:29.934730896Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:48:29.935035 containerd[1593]: time="2025-05-27T17:48:29.934761323Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:48:29.935035 containerd[1593]: time="2025-05-27T17:48:29.934846973Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:48:29.935035 containerd[1593]: time="2025-05-27T17:48:29.934860859Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:48:29.935303 containerd[1593]: time="2025-05-27T17:48:29.935262603Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:48:29.935303 containerd[1593]: time="2025-05-27T17:48:29.935282941Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:48:29.935303 containerd[1593]: time="2025-05-27T17:48:29.935296186Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:48:29.935358 containerd[1593]: time="2025-05-27T17:48:29.935306956Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:48:29.935471 containerd[1593]: time="2025-05-27T17:48:29.935435567Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:48:29.935736 containerd[1593]: time="2025-05-27T17:48:29.935700594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:48:29.935760 containerd[1593]: time="2025-05-27T17:48:29.935745659Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:48:29.935781 containerd[1593]: time="2025-05-27T17:48:29.935760707Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:48:29.935835 containerd[1593]: time="2025-05-27T17:48:29.935805030Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:48:29.936121 containerd[1593]: time="2025-05-27T17:48:29.936082090Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:48:29.936249 containerd[1593]: time="2025-05-27T17:48:29.936175194Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:48:30.155461 tar[1577]: linux-amd64/README.md
May 27 17:48:30.160801 containerd[1593]: time="2025-05-27T17:48:30.160731347Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160834711Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160851663Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160863605Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160875988Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160885216Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160896287Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160906816Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160919971Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27
17:48:30.160921 containerd[1593]: time="2025-05-27T17:48:30.160930030Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 17:48:30.161195 containerd[1593]: time="2025-05-27T17:48:30.160939628Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 17:48:30.161195 containerd[1593]: time="2025-05-27T17:48:30.160956139Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 17:48:30.161195 containerd[1593]: time="2025-05-27T17:48:30.161163337Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 17:48:30.161195 containerd[1593]: time="2025-05-27T17:48:30.161187002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 17:48:30.161195 containerd[1593]: time="2025-05-27T17:48:30.161201549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 17:48:30.161317 containerd[1593]: time="2025-05-27T17:48:30.161227377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 17:48:30.161317 containerd[1593]: time="2025-05-27T17:48:30.161242646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 17:48:30.161317 containerd[1593]: time="2025-05-27T17:48:30.161255420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 17:48:30.161317 containerd[1593]: time="2025-05-27T17:48:30.161266080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:48:30.161317 containerd[1593]: time="2025-05-27T17:48:30.161275558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 17:48:30.161317 containerd[1593]: time="2025-05-27T17:48:30.161308920Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:48:30.161499 containerd[1593]: time="2025-05-27T17:48:30.161325131Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:48:30.161499 containerd[1593]: time="2025-05-27T17:48:30.161339237Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:48:30.161499 containerd[1593]: time="2025-05-27T17:48:30.161446879Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:48:30.161499 containerd[1593]: time="2025-05-27T17:48:30.161461096Z" level=info msg="Start snapshots syncer" May 27 17:48:30.161499 containerd[1593]: time="2025-05-27T17:48:30.161502714Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:48:30.161794 containerd[1593]: time="2025-05-27T17:48:30.161740490Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:48:30.161991 containerd[1593]: time="2025-05-27T17:48:30.161801294Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:48:30.162722 containerd[1593]: time="2025-05-27T17:48:30.162653502Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:48:30.162952 containerd[1593]: time="2025-05-27T17:48:30.162922096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:48:30.163002 containerd[1593]: time="2025-05-27T17:48:30.162961019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:48:30.163031 containerd[1593]: time="2025-05-27T17:48:30.163015952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:48:30.163053 containerd[1593]: time="2025-05-27T17:48:30.163033685Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:48:30.163073 containerd[1593]: time="2025-05-27T17:48:30.163052971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:48:30.163093 containerd[1593]: time="2025-05-27T17:48:30.163068180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:48:30.163130 containerd[1593]: time="2025-05-27T17:48:30.163111982Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:48:30.163197 containerd[1593]: time="2025-05-27T17:48:30.163179498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:48:30.163219 containerd[1593]: time="2025-05-27T17:48:30.163201169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:48:30.163239 containerd[1593]: time="2025-05-27T17:48:30.163216949Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:48:30.163286 containerd[1593]: time="2025-05-27T17:48:30.163267784Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:48:30.163307 containerd[1593]: time="2025-05-27T17:48:30.163295265Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:48:30.163333 containerd[1593]: time="2025-05-27T17:48:30.163308340Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:48:30.163353 containerd[1593]: time="2025-05-27T17:48:30.163330912Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:48:30.163353 containerd[1593]: time="2025-05-27T17:48:30.163342925Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:48:30.163468 containerd[1593]: time="2025-05-27T17:48:30.163440167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:48:30.163492 containerd[1593]: time="2025-05-27T17:48:30.163467198Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:48:30.163542 containerd[1593]: time="2025-05-27T17:48:30.163525006Z" level=info msg="runtime interface created" May 27 17:48:30.163542 containerd[1593]: time="2025-05-27T17:48:30.163538792Z" level=info msg="created NRI interface" May 27 17:48:30.163580 containerd[1593]: time="2025-05-27T17:48:30.163562897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:48:30.163670 containerd[1593]: time="2025-05-27T17:48:30.163585891Z" level=info msg="Connect containerd service" May 27 17:48:30.163670 containerd[1593]: time="2025-05-27T17:48:30.163631857Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:48:30.164787 
containerd[1593]: time="2025-05-27T17:48:30.164751306Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:48:30.181583 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:48:30.326557 containerd[1593]: time="2025-05-27T17:48:30.326494072Z" level=info msg="Start subscribing containerd event" May 27 17:48:30.326734 containerd[1593]: time="2025-05-27T17:48:30.326579412Z" level=info msg="Start recovering state" May 27 17:48:30.326734 containerd[1593]: time="2025-05-27T17:48:30.326698174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:48:30.326876 containerd[1593]: time="2025-05-27T17:48:30.326773455Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:48:30.326876 containerd[1593]: time="2025-05-27T17:48:30.326834861Z" level=info msg="Start event monitor" May 27 17:48:30.326876 containerd[1593]: time="2025-05-27T17:48:30.326853415Z" level=info msg="Start cni network conf syncer for default" May 27 17:48:30.326876 containerd[1593]: time="2025-05-27T17:48:30.326862432Z" level=info msg="Start streaming server" May 27 17:48:30.327004 containerd[1593]: time="2025-05-27T17:48:30.326893711Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:48:30.327004 containerd[1593]: time="2025-05-27T17:48:30.326902938Z" level=info msg="runtime interface starting up..." May 27 17:48:30.327004 containerd[1593]: time="2025-05-27T17:48:30.326913228Z" level=info msg="starting plugins..." 
May 27 17:48:30.327004 containerd[1593]: time="2025-05-27T17:48:30.326937744Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:48:30.327156 containerd[1593]: time="2025-05-27T17:48:30.327104797Z" level=info msg="containerd successfully booted in 0.409527s" May 27 17:48:30.327287 systemd[1]: Started containerd.service - containerd container runtime. May 27 17:48:30.677652 systemd-networkd[1492]: eth0: Gained IPv6LL May 27 17:48:30.682179 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 17:48:30.685644 systemd[1]: Reached target network-online.target - Network is Online. May 27 17:48:30.689290 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 17:48:30.694120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:48:30.697607 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 17:48:30.728965 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 17:48:30.731674 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 17:48:30.732030 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 17:48:30.734880 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 17:48:31.709518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:48:31.711413 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:48:31.713451 systemd[1]: Startup finished in 3.357s (kernel) + 7.182s (initrd) + 5.030s (userspace) = 15.570s. 
May 27 17:48:31.724971 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:48:32.424880 kubelet[1699]: E0527 17:48:32.424804 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:48:32.429562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:48:32.429837 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:48:32.430254 systemd[1]: kubelet.service: Consumed 1.530s CPU time, 265.4M memory peak. May 27 17:48:33.456337 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:48:33.457703 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:38524.service - OpenSSH per-connection server daemon (10.0.0.1:38524). May 27 17:48:33.591666 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 38524 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:48:33.594444 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:33.602271 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:48:33.603414 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:48:33.610976 systemd-logind[1567]: New session 1 of user core. May 27 17:48:33.704857 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:48:33.709022 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 27 17:48:33.734258 (systemd)[1716]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:48:33.737105 systemd-logind[1567]: New session c1 of user core. May 27 17:48:33.939846 systemd[1716]: Queued start job for default target default.target. May 27 17:48:33.952012 systemd[1716]: Created slice app.slice - User Application Slice. May 27 17:48:33.952046 systemd[1716]: Reached target paths.target - Paths. May 27 17:48:33.952104 systemd[1716]: Reached target timers.target - Timers. May 27 17:48:33.953888 systemd[1716]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:48:33.966016 systemd[1716]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:48:33.966236 systemd[1716]: Reached target sockets.target - Sockets. May 27 17:48:33.966308 systemd[1716]: Reached target basic.target - Basic System. May 27 17:48:33.966391 systemd[1716]: Reached target default.target - Main User Target. May 27 17:48:33.966442 systemd[1716]: Startup finished in 217ms. May 27 17:48:33.966719 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:48:33.968894 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:48:34.036660 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:38532.service - OpenSSH per-connection server daemon (10.0.0.1:38532). May 27 17:48:34.093936 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 38532 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:48:34.095778 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:34.101127 systemd-logind[1567]: New session 2 of user core. May 27 17:48:34.110518 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 27 17:48:34.165850 sshd[1729]: Connection closed by 10.0.0.1 port 38532 May 27 17:48:34.166278 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 27 17:48:34.179730 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:38532.service: Deactivated successfully. May 27 17:48:34.181620 systemd[1]: session-2.scope: Deactivated successfully. May 27 17:48:34.182514 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit. May 27 17:48:34.185771 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:38542.service - OpenSSH per-connection server daemon (10.0.0.1:38542). May 27 17:48:34.186561 systemd-logind[1567]: Removed session 2. May 27 17:48:34.247254 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 38542 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:48:34.249090 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:34.253940 systemd-logind[1567]: New session 3 of user core. May 27 17:48:34.271601 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:48:34.322086 sshd[1737]: Connection closed by 10.0.0.1 port 38542 May 27 17:48:34.322482 sshd-session[1735]: pam_unix(sshd:session): session closed for user core May 27 17:48:34.340224 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:38542.service: Deactivated successfully. May 27 17:48:34.342097 systemd[1]: session-3.scope: Deactivated successfully. May 27 17:48:34.342829 systemd-logind[1567]: Session 3 logged out. Waiting for processes to exit. May 27 17:48:34.345607 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:38556.service - OpenSSH per-connection server daemon (10.0.0.1:38556). May 27 17:48:34.346335 systemd-logind[1567]: Removed session 3. 
May 27 17:48:34.414887 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 38556 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:48:34.416540 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:34.422043 systemd-logind[1567]: New session 4 of user core. May 27 17:48:34.432599 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:48:34.487871 sshd[1745]: Connection closed by 10.0.0.1 port 38556 May 27 17:48:34.488208 sshd-session[1743]: pam_unix(sshd:session): session closed for user core May 27 17:48:34.502885 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:38556.service: Deactivated successfully. May 27 17:48:34.505171 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:48:34.506266 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit. May 27 17:48:34.509529 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:38570.service - OpenSSH per-connection server daemon (10.0.0.1:38570). May 27 17:48:34.510130 systemd-logind[1567]: Removed session 4. May 27 17:48:34.558547 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 38570 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:48:34.560526 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:34.565751 systemd-logind[1567]: New session 5 of user core. May 27 17:48:34.575522 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 17:48:34.633596 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:48:34.633898 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:48:34.649323 sudo[1754]: pam_unix(sudo:session): session closed for user root May 27 17:48:34.650854 sshd[1753]: Connection closed by 10.0.0.1 port 38570 May 27 17:48:34.651205 sshd-session[1751]: pam_unix(sshd:session): session closed for user core May 27 17:48:34.667104 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:38570.service: Deactivated successfully. May 27 17:48:34.668798 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:48:34.669623 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit. May 27 17:48:34.672342 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:38582.service - OpenSSH per-connection server daemon (10.0.0.1:38582). May 27 17:48:34.673105 systemd-logind[1567]: Removed session 5. May 27 17:48:34.731626 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 38582 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:48:34.733159 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:34.737677 systemd-logind[1567]: New session 6 of user core. May 27 17:48:34.755546 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 17:48:34.809923 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:48:34.810364 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:48:35.080153 sudo[1764]: pam_unix(sudo:session): session closed for user root May 27 17:48:35.087194 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:48:35.087572 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:48:35.098949 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:48:35.145737 augenrules[1786]: No rules May 27 17:48:35.147499 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:48:35.147788 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:48:35.149105 sudo[1763]: pam_unix(sudo:session): session closed for user root May 27 17:48:35.150806 sshd[1762]: Connection closed by 10.0.0.1 port 38582 May 27 17:48:35.151204 sshd-session[1760]: pam_unix(sshd:session): session closed for user core May 27 17:48:35.165811 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:38582.service: Deactivated successfully. May 27 17:48:35.168292 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:48:35.169189 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit. May 27 17:48:35.172600 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:38594.service - OpenSSH per-connection server daemon (10.0.0.1:38594). May 27 17:48:35.173156 systemd-logind[1567]: Removed session 6. May 27 17:48:35.234733 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 38594 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:48:35.236322 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:35.241011 systemd-logind[1567]: New session 7 of user core. 
May 27 17:48:35.254561 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:48:35.309111 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:48:35.309461 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:48:35.997001 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:48:36.015998 (dockerd)[1818]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:48:36.254988 dockerd[1818]: time="2025-05-27T17:48:36.254655500Z" level=info msg="Starting up" May 27 17:48:36.256601 dockerd[1818]: time="2025-05-27T17:48:36.256551756Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:48:37.645505 dockerd[1818]: time="2025-05-27T17:48:37.645426272Z" level=info msg="Loading containers: start." May 27 17:48:37.657516 kernel: Initializing XFRM netlink socket May 27 17:48:38.062857 systemd-networkd[1492]: docker0: Link UP May 27 17:48:38.069551 dockerd[1818]: time="2025-05-27T17:48:38.069504380Z" level=info msg="Loading containers: done." May 27 17:48:38.084945 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck771650176-merged.mount: Deactivated successfully. 
May 27 17:48:38.086410 dockerd[1818]: time="2025-05-27T17:48:38.086348378Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:48:38.086509 dockerd[1818]: time="2025-05-27T17:48:38.086455559Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:48:38.086612 dockerd[1818]: time="2025-05-27T17:48:38.086583890Z" level=info msg="Initializing buildkit" May 27 17:48:38.121576 dockerd[1818]: time="2025-05-27T17:48:38.121510329Z" level=info msg="Completed buildkit initialization" May 27 17:48:38.125724 dockerd[1818]: time="2025-05-27T17:48:38.125653439Z" level=info msg="Daemon has completed initialization" May 27 17:48:38.125879 dockerd[1818]: time="2025-05-27T17:48:38.125738980Z" level=info msg="API listen on /run/docker.sock" May 27 17:48:38.125950 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:48:38.859309 containerd[1593]: time="2025-05-27T17:48:38.859244284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 17:48:39.455704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785922586.mount: Deactivated successfully. 
May 27 17:48:40.419551 containerd[1593]: time="2025-05-27T17:48:40.419477658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:40.420101 containerd[1593]: time="2025-05-27T17:48:40.420071041Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 27 17:48:40.421226 containerd[1593]: time="2025-05-27T17:48:40.421176374Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:40.424013 containerd[1593]: time="2025-05-27T17:48:40.423957369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:40.425304 containerd[1593]: time="2025-05-27T17:48:40.425246076Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.565932953s"
May 27 17:48:40.425304 containerd[1593]: time="2025-05-27T17:48:40.425301280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 27 17:48:40.426070 containerd[1593]: time="2025-05-27T17:48:40.426042119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 17:48:41.595770 containerd[1593]: time="2025-05-27T17:48:41.595701999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:41.596603 containerd[1593]: time="2025-05-27T17:48:41.596577060Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 27 17:48:41.597691 containerd[1593]: time="2025-05-27T17:48:41.597658929Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:41.600307 containerd[1593]: time="2025-05-27T17:48:41.600275696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:41.601286 containerd[1593]: time="2025-05-27T17:48:41.601241548Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.175167298s"
May 27 17:48:41.601286 containerd[1593]: time="2025-05-27T17:48:41.601280030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 27 17:48:41.602029 containerd[1593]: time="2025-05-27T17:48:41.602002034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 17:48:42.680278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 17:48:42.681840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:48:43.071288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:48:43.085741 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:48:43.180295 containerd[1593]: time="2025-05-27T17:48:43.180231988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:43.181542 containerd[1593]: time="2025-05-27T17:48:43.181477374Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 27 17:48:43.183469 containerd[1593]: time="2025-05-27T17:48:43.183435466Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:43.186193 containerd[1593]: time="2025-05-27T17:48:43.186140339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:43.186571 kubelet[2100]: E0527 17:48:43.186534 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:48:43.187142 containerd[1593]: time="2025-05-27T17:48:43.187112993Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.585081243s"
May 27 17:48:43.187191 containerd[1593]: time="2025-05-27T17:48:43.187143820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 27 17:48:43.187608 containerd[1593]: time="2025-05-27T17:48:43.187578966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 17:48:43.193418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:48:43.193620 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:48:43.194027 systemd[1]: kubelet.service: Consumed 259ms CPU time, 111.9M memory peak.
May 27 17:48:44.386386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797473610.mount: Deactivated successfully.
May 27 17:48:45.236973 containerd[1593]: time="2025-05-27T17:48:45.236892701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:45.237936 containerd[1593]: time="2025-05-27T17:48:45.237867189Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872"
May 27 17:48:45.239466 containerd[1593]: time="2025-05-27T17:48:45.239418428Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:45.241948 containerd[1593]: time="2025-05-27T17:48:45.241885324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:45.243058 containerd[1593]: time="2025-05-27T17:48:45.243009983Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.055391713s"
May 27 17:48:45.243058 containerd[1593]: time="2025-05-27T17:48:45.243049838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 27 17:48:45.243774 containerd[1593]: time="2025-05-27T17:48:45.243730114Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 17:48:45.767775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949964296.mount: Deactivated successfully.
May 27 17:48:46.989900 containerd[1593]: time="2025-05-27T17:48:46.989813964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:46.991038 containerd[1593]: time="2025-05-27T17:48:46.990838506Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 27 17:48:46.992505 containerd[1593]: time="2025-05-27T17:48:46.992445359Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:46.995792 containerd[1593]: time="2025-05-27T17:48:46.995738575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:46.997446 containerd[1593]: time="2025-05-27T17:48:46.997354636Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.753576983s"
May 27 17:48:46.997512 containerd[1593]: time="2025-05-27T17:48:46.997447049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 17:48:46.998072 containerd[1593]: time="2025-05-27T17:48:46.998019533Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 17:48:48.391360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899738263.mount: Deactivated successfully.
May 27 17:48:48.401252 containerd[1593]: time="2025-05-27T17:48:48.401177745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:48:48.402244 containerd[1593]: time="2025-05-27T17:48:48.402205392Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 27 17:48:48.403657 containerd[1593]: time="2025-05-27T17:48:48.403596441Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:48:48.406117 containerd[1593]: time="2025-05-27T17:48:48.405756793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:48:48.406567 containerd[1593]: time="2025-05-27T17:48:48.406514493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.408454194s"
May 27 17:48:48.406567 containerd[1593]: time="2025-05-27T17:48:48.406556602Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 17:48:48.408395 containerd[1593]: time="2025-05-27T17:48:48.408294552Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 27 17:48:48.966093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052428584.mount: Deactivated successfully.
May 27 17:48:51.169686 containerd[1593]: time="2025-05-27T17:48:51.169595467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:51.170230 containerd[1593]: time="2025-05-27T17:48:51.170179933Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 27 17:48:51.171501 containerd[1593]: time="2025-05-27T17:48:51.171460735Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:51.174510 containerd[1593]: time="2025-05-27T17:48:51.174469136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:51.175982 containerd[1593]: time="2025-05-27T17:48:51.175924386Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.767577365s"
May 27 17:48:51.175982 containerd[1593]: time="2025-05-27T17:48:51.175975521Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 27 17:48:53.444407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 17:48:53.446518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:48:53.696205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:48:53.712844 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:48:53.800009 kubelet[2257]: E0527 17:48:53.799919 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:48:53.804757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:48:53.805021 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:48:53.806073 systemd[1]: kubelet.service: Consumed 289ms CPU time, 108.4M memory peak.
May 27 17:48:54.234498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:48:54.234774 systemd[1]: kubelet.service: Consumed 289ms CPU time, 108.4M memory peak.
May 27 17:48:54.237098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:48:54.267390 systemd[1]: Reload requested from client PID 2273 ('systemctl') (unit session-7.scope)...
May 27 17:48:54.267410 systemd[1]: Reloading...
May 27 17:48:54.375408 zram_generator::config[2320]: No configuration found.
May 27 17:48:54.890392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:48:55.043734 systemd[1]: Reloading finished in 775 ms.
May 27 17:48:55.118570 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 17:48:55.118690 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 17:48:55.119027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:48:55.119079 systemd[1]: kubelet.service: Consumed 178ms CPU time, 98.2M memory peak.
May 27 17:48:55.120990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:48:55.316399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:48:55.321218 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:48:55.361241 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:48:55.361241 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:48:55.361241 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:48:55.361750 kubelet[2364]: I0527 17:48:55.361384 2364 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:48:55.833101 kubelet[2364]: I0527 17:48:55.832883 2364 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 17:48:55.833101 kubelet[2364]: I0527 17:48:55.832928 2364 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:48:55.833856 kubelet[2364]: I0527 17:48:55.833812 2364 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 17:48:55.871198 kubelet[2364]: E0527 17:48:55.871136 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:55.872092 kubelet[2364]: I0527 17:48:55.872061 2364 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:48:55.913073 kubelet[2364]: I0527 17:48:55.913036 2364 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:48:55.918702 kubelet[2364]: I0527 17:48:55.918634 2364 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:48:55.919020 kubelet[2364]: I0527 17:48:55.918966 2364 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:48:55.919251 kubelet[2364]: I0527 17:48:55.919009 2364 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:48:55.920470 kubelet[2364]: I0527 17:48:55.920429 2364 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:48:55.920470 kubelet[2364]: I0527 17:48:55.920451 2364 container_manager_linux.go:304] "Creating device plugin manager"
May 27 17:48:55.920686 kubelet[2364]: I0527 17:48:55.920634 2364 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:48:55.930618 kubelet[2364]: I0527 17:48:55.930575 2364 kubelet.go:446] "Attempting to sync node with API server"
May 27 17:48:55.930618 kubelet[2364]: I0527 17:48:55.930614 2364 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:48:55.930693 kubelet[2364]: I0527 17:48:55.930641 2364 kubelet.go:352] "Adding apiserver pod source"
May 27 17:48:55.930693 kubelet[2364]: I0527 17:48:55.930657 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:48:55.945953 kubelet[2364]: W0527 17:48:55.945858 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused
May 27 17:48:55.945953 kubelet[2364]: W0527 17:48:55.945871 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused
May 27 17:48:55.945953 kubelet[2364]: E0527 17:48:55.945958 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:55.946155 kubelet[2364]: E0527 17:48:55.945957 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:55.946155 kubelet[2364]: I0527 17:48:55.946077 2364 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:48:55.946771 kubelet[2364]: I0527 17:48:55.946744 2364 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 17:48:55.946843 kubelet[2364]: W0527 17:48:55.946836 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:48:55.954442 kubelet[2364]: I0527 17:48:55.954398 2364 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 17:48:55.954630 kubelet[2364]: I0527 17:48:55.954555 2364 server.go:1287] "Started kubelet"
May 27 17:48:55.955655 kubelet[2364]: I0527 17:48:55.955611 2364 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:48:55.958479 kubelet[2364]: I0527 17:48:55.958397 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:48:55.958826 kubelet[2364]: I0527 17:48:55.958799 2364 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 17:48:55.960733 kubelet[2364]: I0527 17:48:55.960698 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:48:55.961149 kubelet[2364]: I0527 17:48:55.961126 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 17:48:55.967678 kubelet[2364]: I0527 17:48:55.967636 2364 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 17:48:55.968108 kubelet[2364]: E0527 17:48:55.967825 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:55.968419 kubelet[2364]: I0527 17:48:55.968399 2364 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 17:48:55.968497 kubelet[2364]: I0527 17:48:55.968485 2364 reconciler.go:26] "Reconciler: start to sync state"
May 27 17:48:55.968896 kubelet[2364]: I0527 17:48:55.968874 2364 factory.go:221] Registration of the systemd container factory successfully
May 27 17:48:55.968952 kubelet[2364]: I0527 17:48:55.968942 2364 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 17:48:55.969846 kubelet[2364]: I0527 17:48:55.969801 2364 server.go:479] "Adding debug handlers to kubelet server"
May 27 17:48:55.970876 kubelet[2364]: E0527 17:48:55.970849 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms"
May 27 17:48:55.971044 kubelet[2364]: W0527 17:48:55.971003 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused
May 27 17:48:55.971895 kubelet[2364]: E0527 17:48:55.971250 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:55.971895 kubelet[2364]: I0527 17:48:55.971409 2364 factory.go:221] Registration of the containerd container factory successfully
May 27 17:48:55.971895 kubelet[2364]: E0527 17:48:55.971856 2364 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 17:48:55.981998 kubelet[2364]: E0527 17:48:55.973670 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1843738e702cec17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 17:48:55.954426903 +0000 UTC m=+0.629358062,LastTimestamp:2025-05-27 17:48:55.954426903 +0000 UTC m=+0.629358062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 27 17:48:55.986946 kubelet[2364]: I0527 17:48:55.986852 2364 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 17:48:55.987047 kubelet[2364]: I0527 17:48:55.986971 2364 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 17:48:55.987047 kubelet[2364]: I0527 17:48:55.987000 2364 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:48:55.990156 kubelet[2364]: I0527 17:48:55.990116 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 17:48:55.991359 kubelet[2364]: I0527 17:48:55.991332 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 17:48:55.991426 kubelet[2364]: I0527 17:48:55.991378 2364 status_manager.go:227] "Starting to sync pod status with apiserver"
May 27 17:48:55.992422 kubelet[2364]: I0527 17:48:55.991411 2364 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 17:48:55.992422 kubelet[2364]: I0527 17:48:55.992226 2364 kubelet.go:2382] "Starting kubelet main sync loop"
May 27 17:48:55.992422 kubelet[2364]: E0527 17:48:55.992287 2364 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 17:48:55.992422 kubelet[2364]: W0527 17:48:55.992109 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused
May 27 17:48:55.992422 kubelet[2364]: E0527 17:48:55.992327 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:56.068760 kubelet[2364]: E0527 17:48:56.068707 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.093031 kubelet[2364]: E0527 17:48:56.092928 2364 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 17:48:56.169359 kubelet[2364]: E0527 17:48:56.169270 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.171919 kubelet[2364]: E0527 17:48:56.171871 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms"
May 27 17:48:56.270510 kubelet[2364]: E0527 17:48:56.270435 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.293723 kubelet[2364]: E0527 17:48:56.293658 2364 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 17:48:56.371433 kubelet[2364]: E0527 17:48:56.371253 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.472274 kubelet[2364]: E0527 17:48:56.472211 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.572965 kubelet[2364]: E0527 17:48:56.572888 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.573346 kubelet[2364]: E0527 17:48:56.573312 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms"
May 27 17:48:56.674071 kubelet[2364]: E0527 17:48:56.673929 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.694251 kubelet[2364]: E0527 17:48:56.694190 2364 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 17:48:56.774982 kubelet[2364]: E0527 17:48:56.774923 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.875792 kubelet[2364]: E0527 17:48:56.875728 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:56.888489 kubelet[2364]: W0527 17:48:56.888430 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused
May 27 17:48:56.888489 kubelet[2364]: E0527 17:48:56.888480 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:56.976955 kubelet[2364]: E0527 17:48:56.976804 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:57.044021 kubelet[2364]: W0527 17:48:57.043948 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused
May 27 17:48:57.044021 kubelet[2364]: E0527 17:48:57.044006 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:57.056839 kubelet[2364]: W0527 17:48:57.056794 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused
May 27 17:48:57.056839 kubelet[2364]: E0527 17:48:57.056821 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:57.077499 kubelet[2364]: E0527 17:48:57.077447 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:57.178101 kubelet[2364]: E0527 17:48:57.178032 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:57.278846 kubelet[2364]: E0527 17:48:57.278652 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:57.373786 kubelet[2364]: W0527 17:48:57.373733 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused
May 27 17:48:57.373786 kubelet[2364]: E0527 17:48:57.373780 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:57.374207 kubelet[2364]: E0527 17:48:57.373921 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s"
May 27 17:48:57.379327 kubelet[2364]: E0527 17:48:57.379294 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:57.400964 kubelet[2364]: I0527 17:48:57.400933 2364 policy_none.go:49] "None policy: Start"
May 27 17:48:57.401057 kubelet[2364]: I0527 17:48:57.400978 2364 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 17:48:57.401057 kubelet[2364]: I0527 17:48:57.400998 2364 state_mem.go:35] "Initializing new in-memory state store"
May 27 17:48:57.470487 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 17:48:57.480252 kubelet[2364]: E0527 17:48:57.480207 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:48:57.490037 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 17:48:57.493361 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 17:48:57.495156 kubelet[2364]: E0527 17:48:57.495121 2364 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 17:48:57.512277 kubelet[2364]: I0527 17:48:57.512240 2364 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 17:48:57.512544 kubelet[2364]: I0527 17:48:57.512527 2364 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 17:48:57.512588 kubelet[2364]: I0527 17:48:57.512552 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 17:48:57.513255 kubelet[2364]: I0527 17:48:57.513229 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 17:48:57.513892 kubelet[2364]: E0527 17:48:57.513869 2364 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 17:48:57.513929 kubelet[2364]: E0527 17:48:57.513917 2364 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 27 17:48:57.616817 kubelet[2364]: I0527 17:48:57.616648 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 17:48:57.617191 kubelet[2364]: E0527 17:48:57.617139 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
May 27 17:48:57.819576 kubelet[2364]: I0527 17:48:57.819531 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 17:48:57.820038 kubelet[2364]: E0527 17:48:57.819983 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
May 27 17:48:57.988876 kubelet[2364]: E0527 17:48:57.988689 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError"
May 27 17:48:58.221817 kubelet[2364]: I0527 17:48:58.221762 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 17:48:58.222382 kubelet[2364]: E0527 17:48:58.222309 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
May 27 17:48:58.975224 kubelet[2364]: E0527 17:48:58.975167 2364 controller.go:145] "Failed to ensure lease
exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="3.2s" May 27 17:48:59.018873 kubelet[2364]: W0527 17:48:59.018826 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused May 27 17:48:59.018873 kubelet[2364]: E0527 17:48:59.018880 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" May 27 17:48:59.024677 kubelet[2364]: I0527 17:48:59.024636 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:48:59.025028 kubelet[2364]: E0527 17:48:59.024979 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" May 27 17:48:59.045897 kubelet[2364]: W0527 17:48:59.045784 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused May 27 17:48:59.045897 kubelet[2364]: E0527 17:48:59.045845 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection 
refused" logger="UnhandledError" May 27 17:48:59.106485 systemd[1]: Created slice kubepods-burstable-podebcc60c06c3671a2710ed15f81a2c9b8.slice - libcontainer container kubepods-burstable-podebcc60c06c3671a2710ed15f81a2c9b8.slice. May 27 17:48:59.131511 kubelet[2364]: E0527 17:48:59.131452 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:48:59.133918 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 27 17:48:59.148055 kubelet[2364]: E0527 17:48:59.148013 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:48:59.153755 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 27 17:48:59.155989 kubelet[2364]: E0527 17:48:59.155934 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:48:59.191398 kubelet[2364]: I0527 17:48:59.191299 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:48:59.191398 kubelet[2364]: I0527 17:48:59.191358 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:48:59.191398 kubelet[2364]: I0527 17:48:59.191411 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebcc60c06c3671a2710ed15f81a2c9b8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebcc60c06c3671a2710ed15f81a2c9b8\") " pod="kube-system/kube-apiserver-localhost" May 27 17:48:59.191654 kubelet[2364]: I0527 17:48:59.191427 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebcc60c06c3671a2710ed15f81a2c9b8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebcc60c06c3671a2710ed15f81a2c9b8\") " pod="kube-system/kube-apiserver-localhost" May 27 17:48:59.191654 kubelet[2364]: I0527 17:48:59.191446 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ebcc60c06c3671a2710ed15f81a2c9b8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ebcc60c06c3671a2710ed15f81a2c9b8\") " pod="kube-system/kube-apiserver-localhost" May 27 17:48:59.191654 kubelet[2364]: I0527 17:48:59.191464 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:48:59.191654 kubelet[2364]: I0527 17:48:59.191501 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:48:59.191654 kubelet[2364]: I0527 17:48:59.191576 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:48:59.191792 kubelet[2364]: I0527 17:48:59.191592 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 17:48:59.432548 kubelet[2364]: E0527 17:48:59.432321 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:48:59.433605 containerd[1593]: time="2025-05-27T17:48:59.433547917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ebcc60c06c3671a2710ed15f81a2c9b8,Namespace:kube-system,Attempt:0,}" May 27 17:48:59.449154 kubelet[2364]: E0527 17:48:59.449091 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:48:59.450007 containerd[1593]: time="2025-05-27T17:48:59.449724584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 27 17:48:59.456563 kubelet[2364]: E0527 17:48:59.456516 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:48:59.457167 containerd[1593]: time="2025-05-27T17:48:59.457120665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 27 17:48:59.485607 containerd[1593]: time="2025-05-27T17:48:59.485553657Z" level=info msg="connecting to shim 1b2cdc01059b6dc7eb97bf6ee83ae02d3db1023b70aa40e175c3c7d3785547d3" address="unix:///run/containerd/s/3ad253b5ec55ba28c9f4ca15f56a645106b021f87e0ac75311eb3ff6fbbdeda0" namespace=k8s.io protocol=ttrpc version=3 May 27 17:48:59.498228 containerd[1593]: time="2025-05-27T17:48:59.498173593Z" level=info msg="connecting to shim e78560e314ad7be2a9cedf9ffbbce02518ebdd79362f9c3bda18abddf1408587" address="unix:///run/containerd/s/a7faae40f835cc854ba517933913ca691a937c856d3acfdd1cb7a763ae7a331e" namespace=k8s.io protocol=ttrpc version=3 May 27 17:48:59.508400 containerd[1593]: time="2025-05-27T17:48:59.507916976Z" level=info 
msg="connecting to shim 0229649c6743a63f36fa443bf69abba38bd0c6c9c5f7fe12ca7ab8549d87e925" address="unix:///run/containerd/s/f87734557d32e9483eb7a0677ad176499297a9ea1940734eb8c589a86bec52d0" namespace=k8s.io protocol=ttrpc version=3 May 27 17:48:59.586677 systemd[1]: Started cri-containerd-1b2cdc01059b6dc7eb97bf6ee83ae02d3db1023b70aa40e175c3c7d3785547d3.scope - libcontainer container 1b2cdc01059b6dc7eb97bf6ee83ae02d3db1023b70aa40e175c3c7d3785547d3. May 27 17:48:59.592105 systemd[1]: Started cri-containerd-0229649c6743a63f36fa443bf69abba38bd0c6c9c5f7fe12ca7ab8549d87e925.scope - libcontainer container 0229649c6743a63f36fa443bf69abba38bd0c6c9c5f7fe12ca7ab8549d87e925. May 27 17:48:59.595600 systemd[1]: Started cri-containerd-e78560e314ad7be2a9cedf9ffbbce02518ebdd79362f9c3bda18abddf1408587.scope - libcontainer container e78560e314ad7be2a9cedf9ffbbce02518ebdd79362f9c3bda18abddf1408587. May 27 17:48:59.656701 containerd[1593]: time="2025-05-27T17:48:59.656630837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ebcc60c06c3671a2710ed15f81a2c9b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b2cdc01059b6dc7eb97bf6ee83ae02d3db1023b70aa40e175c3c7d3785547d3\"" May 27 17:48:59.658606 kubelet[2364]: E0527 17:48:59.658526 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:48:59.662018 containerd[1593]: time="2025-05-27T17:48:59.661412765Z" level=info msg="CreateContainer within sandbox \"1b2cdc01059b6dc7eb97bf6ee83ae02d3db1023b70aa40e175c3c7d3785547d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:48:59.677139 containerd[1593]: time="2025-05-27T17:48:59.677072663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"0229649c6743a63f36fa443bf69abba38bd0c6c9c5f7fe12ca7ab8549d87e925\"" May 27 17:48:59.678055 kubelet[2364]: E0527 17:48:59.678011 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:48:59.679662 containerd[1593]: time="2025-05-27T17:48:59.679631031Z" level=info msg="CreateContainer within sandbox \"0229649c6743a63f36fa443bf69abba38bd0c6c9c5f7fe12ca7ab8549d87e925\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:48:59.681600 containerd[1593]: time="2025-05-27T17:48:59.681558065Z" level=info msg="Container 889197c6d515bf9b6c1cf482669414608447438d853d82ce53270cf769d9359d: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:59.683481 containerd[1593]: time="2025-05-27T17:48:59.683307355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e78560e314ad7be2a9cedf9ffbbce02518ebdd79362f9c3bda18abddf1408587\"" May 27 17:48:59.686433 kubelet[2364]: E0527 17:48:59.686406 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:48:59.691209 containerd[1593]: time="2025-05-27T17:48:59.691139764Z" level=info msg="CreateContainer within sandbox \"1b2cdc01059b6dc7eb97bf6ee83ae02d3db1023b70aa40e175c3c7d3785547d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"889197c6d515bf9b6c1cf482669414608447438d853d82ce53270cf769d9359d\"" May 27 17:48:59.691464 containerd[1593]: time="2025-05-27T17:48:59.691417675Z" level=info msg="CreateContainer within sandbox \"e78560e314ad7be2a9cedf9ffbbce02518ebdd79362f9c3bda18abddf1408587\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:48:59.694011 
containerd[1593]: time="2025-05-27T17:48:59.693959362Z" level=info msg="Container 0fb8924ed6c2891414f814196890a4f250b2f3239c11b864d89858d4c840cf8b: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:59.695078 containerd[1593]: time="2025-05-27T17:48:59.695039688Z" level=info msg="StartContainer for \"889197c6d515bf9b6c1cf482669414608447438d853d82ce53270cf769d9359d\"" May 27 17:48:59.696559 containerd[1593]: time="2025-05-27T17:48:59.696509654Z" level=info msg="connecting to shim 889197c6d515bf9b6c1cf482669414608447438d853d82ce53270cf769d9359d" address="unix:///run/containerd/s/3ad253b5ec55ba28c9f4ca15f56a645106b021f87e0ac75311eb3ff6fbbdeda0" protocol=ttrpc version=3 May 27 17:48:59.704721 containerd[1593]: time="2025-05-27T17:48:59.704680708Z" level=info msg="Container 3780a739db24400f53c46df472b632d1b67ba461539c99d13a728ca4515db2f0: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:59.710140 containerd[1593]: time="2025-05-27T17:48:59.710064945Z" level=info msg="CreateContainer within sandbox \"0229649c6743a63f36fa443bf69abba38bd0c6c9c5f7fe12ca7ab8549d87e925\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0fb8924ed6c2891414f814196890a4f250b2f3239c11b864d89858d4c840cf8b\"" May 27 17:48:59.710799 containerd[1593]: time="2025-05-27T17:48:59.710768515Z" level=info msg="StartContainer for \"0fb8924ed6c2891414f814196890a4f250b2f3239c11b864d89858d4c840cf8b\"" May 27 17:48:59.711905 containerd[1593]: time="2025-05-27T17:48:59.711871573Z" level=info msg="connecting to shim 0fb8924ed6c2891414f814196890a4f250b2f3239c11b864d89858d4c840cf8b" address="unix:///run/containerd/s/f87734557d32e9483eb7a0677ad176499297a9ea1940734eb8c589a86bec52d0" protocol=ttrpc version=3 May 27 17:48:59.712761 containerd[1593]: time="2025-05-27T17:48:59.712728220Z" level=info msg="CreateContainer within sandbox \"e78560e314ad7be2a9cedf9ffbbce02518ebdd79362f9c3bda18abddf1408587\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns 
container id \"3780a739db24400f53c46df472b632d1b67ba461539c99d13a728ca4515db2f0\"" May 27 17:48:59.714432 containerd[1593]: time="2025-05-27T17:48:59.713438071Z" level=info msg="StartContainer for \"3780a739db24400f53c46df472b632d1b67ba461539c99d13a728ca4515db2f0\"" May 27 17:48:59.714665 containerd[1593]: time="2025-05-27T17:48:59.714514600Z" level=info msg="connecting to shim 3780a739db24400f53c46df472b632d1b67ba461539c99d13a728ca4515db2f0" address="unix:///run/containerd/s/a7faae40f835cc854ba517933913ca691a937c856d3acfdd1cb7a763ae7a331e" protocol=ttrpc version=3 May 27 17:48:59.721675 systemd[1]: Started cri-containerd-889197c6d515bf9b6c1cf482669414608447438d853d82ce53270cf769d9359d.scope - libcontainer container 889197c6d515bf9b6c1cf482669414608447438d853d82ce53270cf769d9359d. May 27 17:48:59.744575 systemd[1]: Started cri-containerd-0fb8924ed6c2891414f814196890a4f250b2f3239c11b864d89858d4c840cf8b.scope - libcontainer container 0fb8924ed6c2891414f814196890a4f250b2f3239c11b864d89858d4c840cf8b. May 27 17:48:59.749516 systemd[1]: Started cri-containerd-3780a739db24400f53c46df472b632d1b67ba461539c99d13a728ca4515db2f0.scope - libcontainer container 3780a739db24400f53c46df472b632d1b67ba461539c99d13a728ca4515db2f0. 
May 27 17:48:59.812341 containerd[1593]: time="2025-05-27T17:48:59.812267401Z" level=info msg="StartContainer for \"889197c6d515bf9b6c1cf482669414608447438d853d82ce53270cf769d9359d\" returns successfully" May 27 17:48:59.833704 kubelet[2364]: W0527 17:48:59.833351 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused May 27 17:48:59.833704 kubelet[2364]: E0527 17:48:59.833724 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" May 27 17:48:59.844204 containerd[1593]: time="2025-05-27T17:48:59.844109076Z" level=info msg="StartContainer for \"3780a739db24400f53c46df472b632d1b67ba461539c99d13a728ca4515db2f0\" returns successfully" May 27 17:48:59.853304 containerd[1593]: time="2025-05-27T17:48:59.853258274Z" level=info msg="StartContainer for \"0fb8924ed6c2891414f814196890a4f250b2f3239c11b864d89858d4c840cf8b\" returns successfully" May 27 17:49:00.005978 kubelet[2364]: E0527 17:49:00.005824 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:49:00.005978 kubelet[2364]: E0527 17:49:00.005957 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:00.012009 kubelet[2364]: E0527 17:49:00.011966 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:49:00.012177 
kubelet[2364]: E0527 17:49:00.012096 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:00.012671 kubelet[2364]: E0527 17:49:00.012506 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:49:00.012762 kubelet[2364]: E0527 17:49:00.012722 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:00.626823 kubelet[2364]: I0527 17:49:00.626781 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:49:01.014224 kubelet[2364]: E0527 17:49:01.013932 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:49:01.014224 kubelet[2364]: E0527 17:49:01.014038 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:49:01.014224 kubelet[2364]: E0527 17:49:01.014053 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:01.014224 kubelet[2364]: E0527 17:49:01.014219 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:01.393435 kubelet[2364]: I0527 17:49:01.393225 2364 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:49:01.393435 kubelet[2364]: E0527 17:49:01.393274 2364 kubelet_node_status.go:548] "Error updating node status, will retry" err="error 
getting node \"localhost\": node \"localhost\" not found" May 27 17:49:01.468781 kubelet[2364]: I0527 17:49:01.468705 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:49:01.693945 kubelet[2364]: E0527 17:49:01.693792 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 17:49:01.693945 kubelet[2364]: I0527 17:49:01.693850 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:49:01.698788 kubelet[2364]: E0527 17:49:01.698646 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 17:49:01.698788 kubelet[2364]: I0527 17:49:01.698682 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:49:01.701449 kubelet[2364]: E0527 17:49:01.701422 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 17:49:01.936043 kubelet[2364]: I0527 17:49:01.935976 2364 apiserver.go:52] "Watching apiserver" May 27 17:49:01.969314 kubelet[2364]: I0527 17:49:01.969160 2364 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:49:03.275917 kubelet[2364]: I0527 17:49:03.275874 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:49:03.317327 kubelet[2364]: E0527 17:49:03.317271 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:04.017851 kubelet[2364]: E0527 17:49:04.017803 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:04.834356 systemd[1]: Reload requested from client PID 2644 ('systemctl') (unit session-7.scope)... May 27 17:49:04.834399 systemd[1]: Reloading... May 27 17:49:04.922471 zram_generator::config[2690]: No configuration found. May 27 17:49:05.017268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:49:05.155605 systemd[1]: Reloading finished in 320 ms. May 27 17:49:05.183773 kubelet[2364]: I0527 17:49:05.183695 2364 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:49:05.184074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:49:05.202482 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:49:05.202796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:49:05.202858 systemd[1]: kubelet.service: Consumed 1.163s CPU time, 131.7M memory peak. May 27 17:49:05.204910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:49:05.448306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:49:05.457053 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:49:05.511156 kubelet[2732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:49:05.511711 kubelet[2732]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:49:05.511711 kubelet[2732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:49:05.511885 kubelet[2732]: I0527 17:49:05.511844 2732 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:49:05.519031 kubelet[2732]: I0527 17:49:05.518990 2732 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 17:49:05.520015 kubelet[2732]: I0527 17:49:05.519188 2732 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:49:05.520015 kubelet[2732]: I0527 17:49:05.519477 2732 server.go:954] "Client rotation is on, will bootstrap in background" May 27 17:49:05.520723 kubelet[2732]: I0527 17:49:05.520704 2732 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 17:49:05.522996 kubelet[2732]: I0527 17:49:05.522962 2732 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:49:05.527577 kubelet[2732]: I0527 17:49:05.527546 2732 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:49:05.535883 kubelet[2732]: I0527 17:49:05.535774 2732 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:49:05.536257 kubelet[2732]: I0527 17:49:05.536189 2732 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:49:05.536504 kubelet[2732]: I0527 17:49:05.536237 2732 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:49:05.536614 kubelet[2732]: I0527 17:49:05.536509 2732 topology_manager.go:138] "Creating topology manager with none policy" 
May 27 17:49:05.536614 kubelet[2732]: I0527 17:49:05.536523 2732 container_manager_linux.go:304] "Creating device plugin manager" May 27 17:49:05.536614 kubelet[2732]: I0527 17:49:05.536595 2732 state_mem.go:36] "Initialized new in-memory state store" May 27 17:49:05.536814 kubelet[2732]: I0527 17:49:05.536784 2732 kubelet.go:446] "Attempting to sync node with API server" May 27 17:49:05.536814 kubelet[2732]: I0527 17:49:05.536814 2732 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:49:05.536889 kubelet[2732]: I0527 17:49:05.536844 2732 kubelet.go:352] "Adding apiserver pod source" May 27 17:49:05.536889 kubelet[2732]: I0527 17:49:05.536859 2732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:49:05.540824 kubelet[2732]: I0527 17:49:05.539857 2732 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:49:05.541565 kubelet[2732]: I0527 17:49:05.541548 2732 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:49:05.543075 kubelet[2732]: I0527 17:49:05.543045 2732 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:49:05.543135 kubelet[2732]: I0527 17:49:05.543111 2732 server.go:1287] "Started kubelet" May 27 17:49:05.543629 kubelet[2732]: I0527 17:49:05.543580 2732 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:49:05.546717 kubelet[2732]: I0527 17:49:05.546109 2732 server.go:479] "Adding debug handlers to kubelet server" May 27 17:49:05.547248 kubelet[2732]: I0527 17:49:05.547219 2732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:49:05.548701 kubelet[2732]: I0527 17:49:05.548670 2732 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:49:05.550302 kubelet[2732]: E0527 17:49:05.550276 2732 
kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:49:05.550516 kubelet[2732]: I0527 17:49:05.550442 2732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:49:05.550813 kubelet[2732]: I0527 17:49:05.550784 2732 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:49:05.551307 kubelet[2732]: I0527 17:49:05.551130 2732 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:49:05.551520 kubelet[2732]: I0527 17:49:05.551505 2732 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:49:05.551715 kubelet[2732]: I0527 17:49:05.551702 2732 reconciler.go:26] "Reconciler: start to sync state" May 27 17:49:05.554775 kubelet[2732]: I0527 17:49:05.554750 2732 factory.go:221] Registration of the containerd container factory successfully May 27 17:49:05.554869 kubelet[2732]: I0527 17:49:05.554859 2732 factory.go:221] Registration of the systemd container factory successfully May 27 17:49:05.555071 kubelet[2732]: I0527 17:49:05.555047 2732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:49:05.562316 kubelet[2732]: I0527 17:49:05.562235 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:49:05.564354 kubelet[2732]: I0527 17:49:05.563711 2732 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 17:49:05.564354 kubelet[2732]: I0527 17:49:05.563747 2732 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:49:05.564354 kubelet[2732]: I0527 17:49:05.563772 2732 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:49:05.564354 kubelet[2732]: I0527 17:49:05.563791 2732 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:49:05.564354 kubelet[2732]: E0527 17:49:05.563857 2732 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:49:05.600865 kubelet[2732]: I0527 17:49:05.600819 2732 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:49:05.600865 kubelet[2732]: I0527 17:49:05.600849 2732 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:49:05.600865 kubelet[2732]: I0527 17:49:05.600875 2732 state_mem.go:36] "Initialized new in-memory state store" May 27 17:49:05.601110 kubelet[2732]: I0527 17:49:05.601087 2732 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:49:05.601138 kubelet[2732]: I0527 17:49:05.601106 2732 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:49:05.601138 kubelet[2732]: I0527 17:49:05.601131 2732 policy_none.go:49] "None policy: Start" May 27 17:49:05.601177 kubelet[2732]: I0527 17:49:05.601144 2732 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:49:05.601177 kubelet[2732]: I0527 17:49:05.601158 2732 state_mem.go:35] "Initializing new in-memory state store" May 27 17:49:05.601306 kubelet[2732]: I0527 17:49:05.601285 2732 state_mem.go:75] "Updated machine memory state" May 27 17:49:05.606647 kubelet[2732]: I0527 17:49:05.606619 2732 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:49:05.606865 kubelet[2732]: I0527 
17:49:05.606839 2732 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:49:05.606917 kubelet[2732]: I0527 17:49:05.606858 2732 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:49:05.607640 kubelet[2732]: I0527 17:49:05.607612 2732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:49:05.610087 kubelet[2732]: E0527 17:49:05.610012 2732 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:49:05.665076 kubelet[2732]: I0527 17:49:05.664989 2732 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:49:05.665214 kubelet[2732]: I0527 17:49:05.665096 2732 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:49:05.665214 kubelet[2732]: I0527 17:49:05.665165 2732 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:49:05.673486 kubelet[2732]: E0527 17:49:05.673404 2732 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 27 17:49:05.714424 kubelet[2732]: I0527 17:49:05.714291 2732 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:49:05.722428 kubelet[2732]: I0527 17:49:05.722327 2732 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 17:49:05.722603 kubelet[2732]: I0527 17:49:05.722503 2732 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:49:05.801035 sudo[2767]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 17:49:05.801520 sudo[2767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) 
May 27 17:49:05.852870 kubelet[2732]: I0527 17:49:05.852814 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebcc60c06c3671a2710ed15f81a2c9b8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebcc60c06c3671a2710ed15f81a2c9b8\") " pod="kube-system/kube-apiserver-localhost" May 27 17:49:05.852870 kubelet[2732]: I0527 17:49:05.852865 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebcc60c06c3671a2710ed15f81a2c9b8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebcc60c06c3671a2710ed15f81a2c9b8\") " pod="kube-system/kube-apiserver-localhost" May 27 17:49:05.853013 kubelet[2732]: I0527 17:49:05.852901 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebcc60c06c3671a2710ed15f81a2c9b8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ebcc60c06c3671a2710ed15f81a2c9b8\") " pod="kube-system/kube-apiserver-localhost" May 27 17:49:05.853013 kubelet[2732]: I0527 17:49:05.852927 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:49:05.853013 kubelet[2732]: I0527 17:49:05.852953 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:49:05.853013 
kubelet[2732]: I0527 17:49:05.852977 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:49:05.853013 kubelet[2732]: I0527 17:49:05.852997 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:49:05.853129 kubelet[2732]: I0527 17:49:05.853020 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:49:05.853129 kubelet[2732]: I0527 17:49:05.853044 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 17:49:05.974928 kubelet[2732]: E0527 17:49:05.974574 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:05.974928 kubelet[2732]: E0527 17:49:05.974613 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:05.974928 kubelet[2732]: E0527 17:49:05.974691 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:06.284693 sudo[2767]: pam_unix(sudo:session): session closed for user root May 27 17:49:06.537522 kubelet[2732]: I0527 17:49:06.537386 2732 apiserver.go:52] "Watching apiserver" May 27 17:49:06.552098 kubelet[2732]: I0527 17:49:06.552042 2732 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:49:06.583555 kubelet[2732]: I0527 17:49:06.583336 2732 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:49:06.583555 kubelet[2732]: I0527 17:49:06.583388 2732 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:49:06.583717 kubelet[2732]: E0527 17:49:06.583596 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:06.624978 kubelet[2732]: E0527 17:49:06.624706 2732 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 17:49:06.624978 kubelet[2732]: E0527 17:49:06.624726 2732 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 17:49:06.624978 kubelet[2732]: E0527 17:49:06.624891 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:06.624978 kubelet[2732]: E0527 17:49:06.624962 2732 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:06.985519 kubelet[2732]: I0527 17:49:06.985017 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.984993974 podStartE2EDuration="1.984993974s" podCreationTimestamp="2025-05-27 17:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:06.868315568 +0000 UTC m=+1.405517394" watchObservedRunningTime="2025-05-27 17:49:06.984993974 +0000 UTC m=+1.522195800" May 27 17:49:06.985519 kubelet[2732]: I0527 17:49:06.985503 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.985496531 podStartE2EDuration="1.985496531s" podCreationTimestamp="2025-05-27 17:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:06.98541161 +0000 UTC m=+1.522613436" watchObservedRunningTime="2025-05-27 17:49:06.985496531 +0000 UTC m=+1.522698357" May 27 17:49:07.004270 kubelet[2732]: I0527 17:49:07.004208 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.004165193 podStartE2EDuration="4.004165193s" podCreationTimestamp="2025-05-27 17:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:06.995753988 +0000 UTC m=+1.532955824" watchObservedRunningTime="2025-05-27 17:49:07.004165193 +0000 UTC m=+1.541367019" May 27 17:49:07.542554 sudo[1798]: pam_unix(sudo:session): session closed for user root May 27 17:49:07.544050 sshd[1797]: Connection closed by 10.0.0.1 port 38594 May 27 
17:49:07.544587 sshd-session[1795]: pam_unix(sshd:session): session closed for user core May 27 17:49:07.549123 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:38594.service: Deactivated successfully. May 27 17:49:07.551400 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:49:07.551668 systemd[1]: session-7.scope: Consumed 5.340s CPU time, 260.7M memory peak. May 27 17:49:07.553075 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit. May 27 17:49:07.554645 systemd-logind[1567]: Removed session 7. May 27 17:49:07.584394 kubelet[2732]: E0527 17:49:07.584327 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:07.584923 kubelet[2732]: E0527 17:49:07.584412 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:08.707992 kubelet[2732]: I0527 17:49:08.707954 2732 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:49:08.709104 containerd[1593]: time="2025-05-27T17:49:08.709063706Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:49:08.711404 kubelet[2732]: I0527 17:49:08.710113 2732 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:49:09.358246 systemd[1]: Created slice kubepods-besteffort-pod1bfeb92f_c58c_491b_a029_6620703944e9.slice - libcontainer container kubepods-besteffort-pod1bfeb92f_c58c_491b_a029_6620703944e9.slice. 
May 27 17:49:09.377404 kubelet[2732]: I0527 17:49:09.376556 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-run\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377404 kubelet[2732]: I0527 17:49:09.376607 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-hostproc\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377404 kubelet[2732]: I0527 17:49:09.376631 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-lib-modules\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377404 kubelet[2732]: I0527 17:49:09.376651 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-xtables-lock\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377404 kubelet[2732]: I0527 17:49:09.376671 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1bfeb92f-c58c-491b-a029-6620703944e9-kube-proxy\") pod \"kube-proxy-gj7n2\" (UID: \"1bfeb92f-c58c-491b-a029-6620703944e9\") " pod="kube-system/kube-proxy-gj7n2" May 27 17:49:09.377404 kubelet[2732]: I0527 17:49:09.376689 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-host-proc-sys-net\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377872 kubelet[2732]: I0527 17:49:09.376710 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-cgroup\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377872 kubelet[2732]: I0527 17:49:09.376726 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cni-path\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377872 kubelet[2732]: I0527 17:49:09.376747 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgkm\" (UniqueName: \"kubernetes.io/projected/1bfeb92f-c58c-491b-a029-6620703944e9-kube-api-access-kcgkm\") pod \"kube-proxy-gj7n2\" (UID: \"1bfeb92f-c58c-491b-a029-6620703944e9\") " pod="kube-system/kube-proxy-gj7n2" May 27 17:49:09.377872 kubelet[2732]: I0527 17:49:09.376765 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-etc-cni-netd\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377872 kubelet[2732]: I0527 17:49:09.376784 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-hubble-tls\") 
pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.377872 kubelet[2732]: I0527 17:49:09.376800 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt78h\" (UniqueName: \"kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-kube-api-access-mt78h\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.378033 kubelet[2732]: I0527 17:49:09.376818 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bfeb92f-c58c-491b-a029-6620703944e9-xtables-lock\") pod \"kube-proxy-gj7n2\" (UID: \"1bfeb92f-c58c-491b-a029-6620703944e9\") " pod="kube-system/kube-proxy-gj7n2" May 27 17:49:09.378033 kubelet[2732]: I0527 17:49:09.376845 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b83a14-62a9-48f1-9533-d2b3d129997a-clustermesh-secrets\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.378033 kubelet[2732]: I0527 17:49:09.376863 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-host-proc-sys-kernel\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.378033 kubelet[2732]: I0527 17:49:09.376881 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bfeb92f-c58c-491b-a029-6620703944e9-lib-modules\") pod \"kube-proxy-gj7n2\" (UID: \"1bfeb92f-c58c-491b-a029-6620703944e9\") " 
pod="kube-system/kube-proxy-gj7n2" May 27 17:49:09.378033 kubelet[2732]: I0527 17:49:09.376900 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-config-path\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.378174 kubelet[2732]: I0527 17:49:09.376920 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-bpf-maps\") pod \"cilium-dbw86\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " pod="kube-system/cilium-dbw86" May 27 17:49:09.382158 systemd[1]: Created slice kubepods-burstable-podf4b83a14_62a9_48f1_9533_d2b3d129997a.slice - libcontainer container kubepods-burstable-podf4b83a14_62a9_48f1_9533_d2b3d129997a.slice. May 27 17:49:09.487410 kubelet[2732]: E0527 17:49:09.487092 2732 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 17:49:09.487410 kubelet[2732]: E0527 17:49:09.487130 2732 projected.go:194] Error preparing data for projected volume kube-api-access-mt78h for pod kube-system/cilium-dbw86: configmap "kube-root-ca.crt" not found May 27 17:49:09.488284 kubelet[2732]: E0527 17:49:09.487556 2732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-kube-api-access-mt78h podName:f4b83a14-62a9-48f1-9533-d2b3d129997a nodeName:}" failed. No retries permitted until 2025-05-27 17:49:09.987530245 +0000 UTC m=+4.524732132 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mt78h" (UniqueName: "kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-kube-api-access-mt78h") pod "cilium-dbw86" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a") : configmap "kube-root-ca.crt" not found May 27 17:49:09.489645 kubelet[2732]: E0527 17:49:09.489595 2732 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 17:49:09.489645 kubelet[2732]: E0527 17:49:09.489621 2732 projected.go:194] Error preparing data for projected volume kube-api-access-kcgkm for pod kube-system/kube-proxy-gj7n2: configmap "kube-root-ca.crt" not found May 27 17:49:09.489732 kubelet[2732]: E0527 17:49:09.489663 2732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1bfeb92f-c58c-491b-a029-6620703944e9-kube-api-access-kcgkm podName:1bfeb92f-c58c-491b-a029-6620703944e9 nodeName:}" failed. No retries permitted until 2025-05-27 17:49:09.989646627 +0000 UTC m=+4.526848453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kcgkm" (UniqueName: "kubernetes.io/projected/1bfeb92f-c58c-491b-a029-6620703944e9-kube-api-access-kcgkm") pod "kube-proxy-gj7n2" (UID: "1bfeb92f-c58c-491b-a029-6620703944e9") : configmap "kube-root-ca.crt" not found May 27 17:49:09.821393 systemd[1]: Created slice kubepods-besteffort-pod61d22639_9b6c_486e_8d11_4a1dd447a759.slice - libcontainer container kubepods-besteffort-pod61d22639_9b6c_486e_8d11_4a1dd447a759.slice. 
May 27 17:49:09.880876 kubelet[2732]: I0527 17:49:09.880812 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8r2n\" (UniqueName: \"kubernetes.io/projected/61d22639-9b6c-486e-8d11-4a1dd447a759-kube-api-access-v8r2n\") pod \"cilium-operator-6c4d7847fc-m4chl\" (UID: \"61d22639-9b6c-486e-8d11-4a1dd447a759\") " pod="kube-system/cilium-operator-6c4d7847fc-m4chl" May 27 17:49:09.881431 kubelet[2732]: I0527 17:49:09.880910 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61d22639-9b6c-486e-8d11-4a1dd447a759-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-m4chl\" (UID: \"61d22639-9b6c-486e-8d11-4a1dd447a759\") " pod="kube-system/cilium-operator-6c4d7847fc-m4chl" May 27 17:49:10.125675 kubelet[2732]: E0527 17:49:10.125503 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:10.126452 containerd[1593]: time="2025-05-27T17:49:10.126407880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m4chl,Uid:61d22639-9b6c-486e-8d11-4a1dd447a759,Namespace:kube-system,Attempt:0,}" May 27 17:49:10.163927 kubelet[2732]: E0527 17:49:10.163862 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:10.274483 kubelet[2732]: E0527 17:49:10.274422 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:10.275110 containerd[1593]: time="2025-05-27T17:49:10.275054773Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-gj7n2,Uid:1bfeb92f-c58c-491b-a029-6620703944e9,Namespace:kube-system,Attempt:0,}" May 27 17:49:10.286725 kubelet[2732]: E0527 17:49:10.286685 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:10.287310 containerd[1593]: time="2025-05-27T17:49:10.287257733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbw86,Uid:f4b83a14-62a9-48f1-9533-d2b3d129997a,Namespace:kube-system,Attempt:0,}" May 27 17:49:10.592706 kubelet[2732]: E0527 17:49:10.592670 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:10.990394 containerd[1593]: time="2025-05-27T17:49:10.990257569Z" level=info msg="connecting to shim 6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879" address="unix:///run/containerd/s/651e13528f54d627938b69a0b3bfaefce303e7c4b2a411c14db672ca06d06c1c" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:11.013127 containerd[1593]: time="2025-05-27T17:49:11.013070137Z" level=info msg="connecting to shim f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31" address="unix:///run/containerd/s/c021c8edf35bb8d46a2292be77cf55294ba3fb518491a016f544fd026238a329" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:11.022626 systemd[1]: Started cri-containerd-6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879.scope - libcontainer container 6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879. May 27 17:49:11.041587 systemd[1]: Started cri-containerd-f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31.scope - libcontainer container f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31. 
May 27 17:49:11.065699 containerd[1593]: time="2025-05-27T17:49:11.065639355Z" level=info msg="connecting to shim fe011b05f003817c867ec7566e8760d877daf63159c49a556f8bbb1dd8447cb8" address="unix:///run/containerd/s/bcdab933bb0d1860540eda824d82858827f9be32dae751d763e422c86dc0daba" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:11.105595 systemd[1]: Started cri-containerd-fe011b05f003817c867ec7566e8760d877daf63159c49a556f8bbb1dd8447cb8.scope - libcontainer container fe011b05f003817c867ec7566e8760d877daf63159c49a556f8bbb1dd8447cb8. May 27 17:49:11.157537 containerd[1593]: time="2025-05-27T17:49:11.157489070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gj7n2,Uid:1bfeb92f-c58c-491b-a029-6620703944e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe011b05f003817c867ec7566e8760d877daf63159c49a556f8bbb1dd8447cb8\"" May 27 17:49:11.158431 kubelet[2732]: E0527 17:49:11.158362 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:11.161734 containerd[1593]: time="2025-05-27T17:49:11.160861799Z" level=info msg="CreateContainer within sandbox \"fe011b05f003817c867ec7566e8760d877daf63159c49a556f8bbb1dd8447cb8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:49:11.163575 containerd[1593]: time="2025-05-27T17:49:11.163403842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbw86,Uid:f4b83a14-62a9-48f1-9533-d2b3d129997a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\"" May 27 17:49:11.164713 kubelet[2732]: E0527 17:49:11.164677 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:11.166550 containerd[1593]: time="2025-05-27T17:49:11.166356724Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m4chl,Uid:61d22639-9b6c-486e-8d11-4a1dd447a759,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\"" May 27 17:49:11.167093 containerd[1593]: time="2025-05-27T17:49:11.167059758Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 17:49:11.167275 kubelet[2732]: E0527 17:49:11.167251 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:49:11.180177 containerd[1593]: time="2025-05-27T17:49:11.180133212Z" level=info msg="Container 09af487ddd7526aa5bcb4953aed5a701bd38b4ad6611ae8420fbfd55915750fc: CDI devices from CRI Config.CDIDevices: []" May 27 17:49:11.189105 containerd[1593]: time="2025-05-27T17:49:11.189046473Z" level=info msg="CreateContainer within sandbox \"fe011b05f003817c867ec7566e8760d877daf63159c49a556f8bbb1dd8447cb8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09af487ddd7526aa5bcb4953aed5a701bd38b4ad6611ae8420fbfd55915750fc\"" May 27 17:49:11.189740 containerd[1593]: time="2025-05-27T17:49:11.189700164Z" level=info msg="StartContainer for \"09af487ddd7526aa5bcb4953aed5a701bd38b4ad6611ae8420fbfd55915750fc\"" May 27 17:49:11.191228 containerd[1593]: time="2025-05-27T17:49:11.191188216Z" level=info msg="connecting to shim 09af487ddd7526aa5bcb4953aed5a701bd38b4ad6611ae8420fbfd55915750fc" address="unix:///run/containerd/s/bcdab933bb0d1860540eda824d82858827f9be32dae751d763e422c86dc0daba" protocol=ttrpc version=3 May 27 17:49:11.219725 systemd[1]: Started cri-containerd-09af487ddd7526aa5bcb4953aed5a701bd38b4ad6611ae8420fbfd55915750fc.scope - libcontainer container 09af487ddd7526aa5bcb4953aed5a701bd38b4ad6611ae8420fbfd55915750fc. 
May 27 17:49:11.298165 containerd[1593]: time="2025-05-27T17:49:11.298104748Z" level=info msg="StartContainer for \"09af487ddd7526aa5bcb4953aed5a701bd38b4ad6611ae8420fbfd55915750fc\" returns successfully"
May 27 17:49:11.541206 kubelet[2732]: E0527 17:49:11.541165 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:11.597699 kubelet[2732]: E0527 17:49:11.597557 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:15.145740 update_engine[1569]: I20250527 17:49:15.145658 1569 update_attempter.cc:509] Updating boot flags...
May 27 17:49:15.895040 kubelet[2732]: E0527 17:49:15.894994 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:16.107902 kubelet[2732]: I0527 17:49:16.107835 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gj7n2" podStartSLOduration=7.107811505 podStartE2EDuration="7.107811505s" podCreationTimestamp="2025-05-27 17:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:11.616020122 +0000 UTC m=+6.153221948" watchObservedRunningTime="2025-05-27 17:49:16.107811505 +0000 UTC m=+10.645013331"
May 27 17:49:16.111467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268543230.mount: Deactivated successfully.
May 27 17:49:16.607424 kubelet[2732]: E0527 17:49:16.607361 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:21.432209 containerd[1593]: time="2025-05-27T17:49:21.432126290Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:49:21.441503 containerd[1593]: time="2025-05-27T17:49:21.441439974Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 27 17:49:21.450865 containerd[1593]: time="2025-05-27T17:49:21.450711597Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:49:21.453425 containerd[1593]: time="2025-05-27T17:49:21.453288951Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.286195799s"
May 27 17:49:21.453425 containerd[1593]: time="2025-05-27T17:49:21.453335128Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 27 17:49:21.459716 containerd[1593]: time="2025-05-27T17:49:21.459656797Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 27 17:49:21.461449 containerd[1593]: time="2025-05-27T17:49:21.461399064Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 17:49:21.534185 containerd[1593]: time="2025-05-27T17:49:21.534134395Z" level=info msg="Container 324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:21.539190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699416225.mount: Deactivated successfully.
May 27 17:49:21.549012 containerd[1593]: time="2025-05-27T17:49:21.548926606Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\""
May 27 17:49:21.549619 containerd[1593]: time="2025-05-27T17:49:21.549576182Z" level=info msg="StartContainer for \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\""
May 27 17:49:21.551144 containerd[1593]: time="2025-05-27T17:49:21.551078907Z" level=info msg="connecting to shim 324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c" address="unix:///run/containerd/s/c021c8edf35bb8d46a2292be77cf55294ba3fb518491a016f544fd026238a329" protocol=ttrpc version=3
May 27 17:49:21.576602 systemd[1]: Started cri-containerd-324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c.scope - libcontainer container 324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c.
May 27 17:49:21.616404 containerd[1593]: time="2025-05-27T17:49:21.616336038Z" level=info msg="StartContainer for \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\" returns successfully"
May 27 17:49:21.625639 systemd[1]: cri-containerd-324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c.scope: Deactivated successfully.
May 27 17:49:21.626101 kubelet[2732]: E0527 17:49:21.626044 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:21.637221 containerd[1593]: time="2025-05-27T17:49:21.636862137Z" level=info msg="received exit event container_id:\"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\" id:\"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\" pid:3172 exited_at:{seconds:1748368161 nanos:633804769}"
May 27 17:49:21.637221 containerd[1593]: time="2025-05-27T17:49:21.637094045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\" id:\"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\" pid:3172 exited_at:{seconds:1748368161 nanos:633804769}"
May 27 17:49:21.663252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c-rootfs.mount: Deactivated successfully.
May 27 17:49:22.628675 kubelet[2732]: E0527 17:49:22.628534 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:22.630889 containerd[1593]: time="2025-05-27T17:49:22.630848315Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 17:49:22.644492 containerd[1593]: time="2025-05-27T17:49:22.644448197Z" level=info msg="Container 83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:22.651622 containerd[1593]: time="2025-05-27T17:49:22.651572194Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\""
May 27 17:49:22.652173 containerd[1593]: time="2025-05-27T17:49:22.652137710Z" level=info msg="StartContainer for \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\""
May 27 17:49:22.653155 containerd[1593]: time="2025-05-27T17:49:22.653122338Z" level=info msg="connecting to shim 83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c" address="unix:///run/containerd/s/c021c8edf35bb8d46a2292be77cf55294ba3fb518491a016f544fd026238a329" protocol=ttrpc version=3
May 27 17:49:22.675504 systemd[1]: Started cri-containerd-83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c.scope - libcontainer container 83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c.
May 27 17:49:22.708562 containerd[1593]: time="2025-05-27T17:49:22.708509736Z" level=info msg="StartContainer for \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\" returns successfully"
May 27 17:49:22.724576 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:49:22.724903 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:49:22.725586 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 27 17:49:22.727203 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:49:22.730303 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:49:22.730942 systemd[1]: cri-containerd-83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c.scope: Deactivated successfully.
May 27 17:49:22.731640 containerd[1593]: time="2025-05-27T17:49:22.731589227Z" level=info msg="received exit event container_id:\"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\" id:\"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\" pid:3217 exited_at:{seconds:1748368162 nanos:730014026}"
May 27 17:49:22.731810 containerd[1593]: time="2025-05-27T17:49:22.731743458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\" id:\"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\" pid:3217 exited_at:{seconds:1748368162 nanos:730014026}"
May 27 17:49:22.757140 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:49:23.632976 kubelet[2732]: E0527 17:49:23.632931 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:23.638795 containerd[1593]: time="2025-05-27T17:49:23.638751750Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 17:49:23.643745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c-rootfs.mount: Deactivated successfully.
May 27 17:49:23.765826 containerd[1593]: time="2025-05-27T17:49:23.765355977Z" level=info msg="Container daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:23.769024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount932124088.mount: Deactivated successfully.
May 27 17:49:23.780440 containerd[1593]: time="2025-05-27T17:49:23.780351099Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\""
May 27 17:49:23.781009 containerd[1593]: time="2025-05-27T17:49:23.780973453Z" level=info msg="StartContainer for \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\""
May 27 17:49:23.782798 containerd[1593]: time="2025-05-27T17:49:23.782750163Z" level=info msg="connecting to shim daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d" address="unix:///run/containerd/s/c021c8edf35bb8d46a2292be77cf55294ba3fb518491a016f544fd026238a329" protocol=ttrpc version=3
May 27 17:49:23.785858 containerd[1593]: time="2025-05-27T17:49:23.785690897Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:49:23.787220 containerd[1593]: time="2025-05-27T17:49:23.787156430Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 27 17:49:23.788647 containerd[1593]: time="2025-05-27T17:49:23.788590685Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:49:23.790034 containerd[1593]: time="2025-05-27T17:49:23.789769648Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.330068589s"
May 27 17:49:23.790034 containerd[1593]: time="2025-05-27T17:49:23.789799664Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 27 17:49:23.794697 containerd[1593]: time="2025-05-27T17:49:23.794654518Z" level=info msg="CreateContainer within sandbox \"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 27 17:49:23.806612 containerd[1593]: time="2025-05-27T17:49:23.806499872Z" level=info msg="Container bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:23.815902 containerd[1593]: time="2025-05-27T17:49:23.815846495Z" level=info msg="CreateContainer within sandbox \"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\""
May 27 17:49:23.816685 systemd[1]: Started cri-containerd-daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d.scope - libcontainer container daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d.
May 27 17:49:23.816994 containerd[1593]: time="2025-05-27T17:49:23.816820973Z" level=info msg="StartContainer for \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\""
May 27 17:49:23.818556 containerd[1593]: time="2025-05-27T17:49:23.818509556Z" level=info msg="connecting to shim bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218" address="unix:///run/containerd/s/651e13528f54d627938b69a0b3bfaefce303e7c4b2a411c14db672ca06d06c1c" protocol=ttrpc version=3
May 27 17:49:23.839576 systemd[1]: Started cri-containerd-bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218.scope - libcontainer container bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218.
May 27 17:49:23.865840 systemd[1]: cri-containerd-daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d.scope: Deactivated successfully.
May 27 17:49:23.870278 containerd[1593]: time="2025-05-27T17:49:23.870233920Z" level=info msg="StartContainer for \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\" returns successfully"
May 27 17:49:23.871105 containerd[1593]: time="2025-05-27T17:49:23.871048496Z" level=info msg="received exit event container_id:\"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\" id:\"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\" pid:3282 exited_at:{seconds:1748368163 nanos:870820486}"
May 27 17:49:23.872206 containerd[1593]: time="2025-05-27T17:49:23.872053611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\" id:\"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\" pid:3282 exited_at:{seconds:1748368163 nanos:870820486}"
May 27 17:49:23.884251 containerd[1593]: time="2025-05-27T17:49:23.884140771Z" level=info msg="StartContainer for \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" returns successfully"
May 27 17:49:24.637711 kubelet[2732]: E0527 17:49:24.637679 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:24.640097 kubelet[2732]: E0527 17:49:24.640068 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:24.640280 containerd[1593]: time="2025-05-27T17:49:24.640119698Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:49:24.644088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d-rootfs.mount: Deactivated successfully.
May 27 17:49:25.117210 containerd[1593]: time="2025-05-27T17:49:25.117158700Z" level=info msg="Container 99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:25.120962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713544413.mount: Deactivated successfully.
May 27 17:49:25.367194 containerd[1593]: time="2025-05-27T17:49:25.367143261Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\""
May 27 17:49:25.371278 containerd[1593]: time="2025-05-27T17:49:25.371179155Z" level=info msg="StartContainer for \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\""
May 27 17:49:25.375397 containerd[1593]: time="2025-05-27T17:49:25.373574979Z" level=info msg="connecting to shim 99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe" address="unix:///run/containerd/s/c021c8edf35bb8d46a2292be77cf55294ba3fb518491a016f544fd026238a329" protocol=ttrpc version=3
May 27 17:49:25.414855 systemd[1]: Started cri-containerd-99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe.scope - libcontainer container 99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe.
May 27 17:49:25.454865 systemd[1]: cri-containerd-99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe.scope: Deactivated successfully.
May 27 17:49:25.455246 containerd[1593]: time="2025-05-27T17:49:25.455115812Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\" id:\"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\" pid:3354 exited_at:{seconds:1748368165 nanos:454891319}"
May 27 17:49:25.558621 containerd[1593]: time="2025-05-27T17:49:25.558444458Z" level=info msg="received exit event container_id:\"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\" id:\"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\" pid:3354 exited_at:{seconds:1748368165 nanos:454891319}"
May 27 17:49:25.576426 containerd[1593]: time="2025-05-27T17:49:25.576355898Z" level=info msg="StartContainer for \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\" returns successfully"
May 27 17:49:25.598189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe-rootfs.mount: Deactivated successfully.
May 27 17:49:25.646061 kubelet[2732]: E0527 17:49:25.645478 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:25.646061 kubelet[2732]: E0527 17:49:25.645868 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:25.697786 kubelet[2732]: I0527 17:49:25.697707 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-m4chl" podStartSLOduration=4.075320981 podStartE2EDuration="16.697673679s" podCreationTimestamp="2025-05-27 17:49:09 +0000 UTC" firstStartedPulling="2025-05-27 17:49:11.168606112 +0000 UTC m=+5.705807928" lastFinishedPulling="2025-05-27 17:49:23.7909588 +0000 UTC m=+18.328160626" observedRunningTime="2025-05-27 17:49:25.370764614 +0000 UTC m=+19.907966440" watchObservedRunningTime="2025-05-27 17:49:25.697673679 +0000 UTC m=+20.234875505"
May 27 17:49:26.651693 kubelet[2732]: E0527 17:49:26.651659 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:26.654969 containerd[1593]: time="2025-05-27T17:49:26.654257408Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 17:49:26.670409 containerd[1593]: time="2025-05-27T17:49:26.670279589Z" level=info msg="Container d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:26.675469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729053656.mount: Deactivated successfully.
May 27 17:49:26.684671 containerd[1593]: time="2025-05-27T17:49:26.684292945Z" level=info msg="CreateContainer within sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\""
May 27 17:49:26.700018 containerd[1593]: time="2025-05-27T17:49:26.699948205Z" level=info msg="StartContainer for \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\""
May 27 17:49:26.701096 containerd[1593]: time="2025-05-27T17:49:26.701067614Z" level=info msg="connecting to shim d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873" address="unix:///run/containerd/s/c021c8edf35bb8d46a2292be77cf55294ba3fb518491a016f544fd026238a329" protocol=ttrpc version=3
May 27 17:49:26.720508 systemd[1]: Started cri-containerd-d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873.scope - libcontainer container d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873.
May 27 17:49:26.758559 containerd[1593]: time="2025-05-27T17:49:26.758484769Z" level=info msg="StartContainer for \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" returns successfully"
May 27 17:49:26.860888 containerd[1593]: time="2025-05-27T17:49:26.860827801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" id:\"80195073d1b64069c1f58b91ebfadef338727b2ad47c41cb5049f4b417db79a8\" pid:3425 exited_at:{seconds:1748368166 nanos:860178409}"
May 27 17:49:26.915723 kubelet[2732]: I0527 17:49:26.915228 2732 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 27 17:49:27.024267 systemd[1]: Created slice kubepods-burstable-pod59bc7a8a_264f_4fe9_b929_7602e5d8260c.slice - libcontainer container kubepods-burstable-pod59bc7a8a_264f_4fe9_b929_7602e5d8260c.slice.
May 27 17:49:27.032198 systemd[1]: Created slice kubepods-burstable-pod50ae6ea9_3ad6_407f_a52f_224041e3e252.slice - libcontainer container kubepods-burstable-pod50ae6ea9_3ad6_407f_a52f_224041e3e252.slice.
May 27 17:49:27.096822 kubelet[2732]: I0527 17:49:27.096755 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50ae6ea9-3ad6-407f-a52f-224041e3e252-config-volume\") pod \"coredns-668d6bf9bc-tq6nn\" (UID: \"50ae6ea9-3ad6-407f-a52f-224041e3e252\") " pod="kube-system/coredns-668d6bf9bc-tq6nn"
May 27 17:49:27.096822 kubelet[2732]: I0527 17:49:27.096827 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svpfb\" (UniqueName: \"kubernetes.io/projected/59bc7a8a-264f-4fe9-b929-7602e5d8260c-kube-api-access-svpfb\") pod \"coredns-668d6bf9bc-tmv97\" (UID: \"59bc7a8a-264f-4fe9-b929-7602e5d8260c\") " pod="kube-system/coredns-668d6bf9bc-tmv97"
May 27 17:49:27.097022 kubelet[2732]: I0527 17:49:27.096857 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59bc7a8a-264f-4fe9-b929-7602e5d8260c-config-volume\") pod \"coredns-668d6bf9bc-tmv97\" (UID: \"59bc7a8a-264f-4fe9-b929-7602e5d8260c\") " pod="kube-system/coredns-668d6bf9bc-tmv97"
May 27 17:49:27.097022 kubelet[2732]: I0527 17:49:27.096885 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kphh\" (UniqueName: \"kubernetes.io/projected/50ae6ea9-3ad6-407f-a52f-224041e3e252-kube-api-access-5kphh\") pod \"coredns-668d6bf9bc-tq6nn\" (UID: \"50ae6ea9-3ad6-407f-a52f-224041e3e252\") " pod="kube-system/coredns-668d6bf9bc-tq6nn"
May 27 17:49:27.330607 kubelet[2732]: E0527 17:49:27.330548 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:27.332213 containerd[1593]: time="2025-05-27T17:49:27.332167715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tmv97,Uid:59bc7a8a-264f-4fe9-b929-7602e5d8260c,Namespace:kube-system,Attempt:0,}"
May 27 17:49:27.336056 kubelet[2732]: E0527 17:49:27.335992 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:27.336965 containerd[1593]: time="2025-05-27T17:49:27.336915275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tq6nn,Uid:50ae6ea9-3ad6-407f-a52f-224041e3e252,Namespace:kube-system,Attempt:0,}"
May 27 17:49:27.658471 kubelet[2732]: E0527 17:49:27.658340 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:27.739221 kubelet[2732]: I0527 17:49:27.738761 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dbw86" podStartSLOduration=8.445633691 podStartE2EDuration="18.738741831s" podCreationTimestamp="2025-05-27 17:49:09 +0000 UTC" firstStartedPulling="2025-05-27 17:49:11.166294877 +0000 UTC m=+5.703496704" lastFinishedPulling="2025-05-27 17:49:21.459403018 +0000 UTC m=+15.996604844" observedRunningTime="2025-05-27 17:49:27.738270693 +0000 UTC m=+22.275472539" watchObservedRunningTime="2025-05-27 17:49:27.738741831 +0000 UTC m=+22.275943657"
May 27 17:49:28.659863 kubelet[2732]: E0527 17:49:28.659819 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:29.295811 systemd-networkd[1492]: cilium_host: Link UP
May 27 17:49:29.295959 systemd-networkd[1492]: cilium_net: Link UP
May 27 17:49:29.296131 systemd-networkd[1492]: cilium_net: Gained carrier
May 27 17:49:29.296300 systemd-networkd[1492]: cilium_host: Gained carrier
May 27 17:49:29.399659 systemd-networkd[1492]: cilium_vxlan: Link UP
May 27 17:49:29.399670 systemd-networkd[1492]: cilium_vxlan: Gained carrier
May 27 17:49:29.565603 systemd-networkd[1492]: cilium_host: Gained IPv6LL
May 27 17:49:29.654412 kernel: NET: Registered PF_ALG protocol family
May 27 17:49:29.661538 kubelet[2732]: E0527 17:49:29.661512 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:30.069717 systemd-networkd[1492]: cilium_net: Gained IPv6LL
May 27 17:49:30.368200 systemd-networkd[1492]: lxc_health: Link UP
May 27 17:49:30.370998 systemd-networkd[1492]: lxc_health: Gained carrier
May 27 17:49:30.698977 systemd-networkd[1492]: lxc6d525161beb2: Link UP
May 27 17:49:30.710536 kernel: eth0: renamed from tmp7e74e
May 27 17:49:30.710610 systemd-networkd[1492]: lxc6d525161beb2: Gained carrier
May 27 17:49:30.757673 systemd-networkd[1492]: lxca1485a967723: Link UP
May 27 17:49:30.772411 kernel: eth0: renamed from tmp9d39b
May 27 17:49:30.773772 systemd-networkd[1492]: lxca1485a967723: Gained carrier
May 27 17:49:31.157566 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL
May 27 17:49:31.798012 systemd-networkd[1492]: lxc_health: Gained IPv6LL
May 27 17:49:32.288363 kubelet[2732]: E0527 17:49:32.288316 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:32.565634 systemd-networkd[1492]: lxca1485a967723: Gained IPv6LL
May 27 17:49:32.668645 kubelet[2732]: E0527 17:49:32.668613 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:32.693637 systemd-networkd[1492]: lxc6d525161beb2: Gained IPv6LL
May 27 17:49:35.464474 containerd[1593]: time="2025-05-27T17:49:35.463721838Z" level=info msg="connecting to shim 7e74ee6aa1ef4d3acaaf8cdcbcf5610cb0d4417513aa098463ddb633fbbb52d1" address="unix:///run/containerd/s/57b83ac050ff19fb9812e13195cd689b84335b4767fa5180a67d56cd625a5ae5" namespace=k8s.io protocol=ttrpc version=3
May 27 17:49:35.465630 containerd[1593]: time="2025-05-27T17:49:35.465437714Z" level=info msg="connecting to shim 9d39b47dc3ae9868c602ccfc87ce83f5805cea1bcb4a7f2230841d59cb6074e9" address="unix:///run/containerd/s/0314bc4753f1dcef53093181204af653b36fcd203fb8d42cf94960ad5fc924d2" namespace=k8s.io protocol=ttrpc version=3
May 27 17:49:35.500131 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:50052.service - OpenSSH per-connection server daemon (10.0.0.1:50052).
May 27 17:49:35.511288 systemd[1]: Started cri-containerd-9d39b47dc3ae9868c602ccfc87ce83f5805cea1bcb4a7f2230841d59cb6074e9.scope - libcontainer container 9d39b47dc3ae9868c602ccfc87ce83f5805cea1bcb4a7f2230841d59cb6074e9.
May 27 17:49:35.533581 systemd[1]: Started cri-containerd-7e74ee6aa1ef4d3acaaf8cdcbcf5610cb0d4417513aa098463ddb633fbbb52d1.scope - libcontainer container 7e74ee6aa1ef4d3acaaf8cdcbcf5610cb0d4417513aa098463ddb633fbbb52d1.
May 27 17:49:35.541741 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 27 17:49:35.552606 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 27 17:49:35.636531 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 50052 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:49:35.638395 sshd-session[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:35.658100 systemd-logind[1567]: New session 8 of user core.
May 27 17:49:35.667656 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 17:49:35.675254 containerd[1593]: time="2025-05-27T17:49:35.675206833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tmv97,Uid:59bc7a8a-264f-4fe9-b929-7602e5d8260c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d39b47dc3ae9868c602ccfc87ce83f5805cea1bcb4a7f2230841d59cb6074e9\""
May 27 17:49:35.681505 kubelet[2732]: E0527 17:49:35.676072 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:35.682749 containerd[1593]: time="2025-05-27T17:49:35.682700030Z" level=info msg="CreateContainer within sandbox \"9d39b47dc3ae9868c602ccfc87ce83f5805cea1bcb4a7f2230841d59cb6074e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 17:49:35.756557 containerd[1593]: time="2025-05-27T17:49:35.755929433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tq6nn,Uid:50ae6ea9-3ad6-407f-a52f-224041e3e252,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e74ee6aa1ef4d3acaaf8cdcbcf5610cb0d4417513aa098463ddb633fbbb52d1\""
May 27 17:49:35.757140 kubelet[2732]: E0527 17:49:35.757110 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:35.759542 containerd[1593]: time="2025-05-27T17:49:35.759427158Z" level=info msg="CreateContainer within sandbox \"7e74ee6aa1ef4d3acaaf8cdcbcf5610cb0d4417513aa098463ddb633fbbb52d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 17:49:35.824377 containerd[1593]: time="2025-05-27T17:49:35.824301533Z" level=info msg="Container 15052d94681e7751709766acfb34d502d934239f1e76dbda87cd6f41e5fd3bde: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:35.829256 containerd[1593]: time="2025-05-27T17:49:35.829171449Z" level=info msg="Container fcb1d8c1e6779a2605b1c93c5c64953bc4a649ca818d3ba33b6e0f391ce98db1: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:35.854579 containerd[1593]: time="2025-05-27T17:49:35.854515005Z" level=info msg="CreateContainer within sandbox \"9d39b47dc3ae9868c602ccfc87ce83f5805cea1bcb4a7f2230841d59cb6074e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15052d94681e7751709766acfb34d502d934239f1e76dbda87cd6f41e5fd3bde\""
May 27 17:49:35.859926 containerd[1593]: time="2025-05-27T17:49:35.859813506Z" level=info msg="StartContainer for \"15052d94681e7751709766acfb34d502d934239f1e76dbda87cd6f41e5fd3bde\""
May 27 17:49:35.861397 sshd[3995]: Connection closed by 10.0.0.1 port 50052
May 27 17:49:35.861802 containerd[1593]: time="2025-05-27T17:49:35.861303979Z" level=info msg="connecting to shim 15052d94681e7751709766acfb34d502d934239f1e76dbda87cd6f41e5fd3bde" address="unix:///run/containerd/s/0314bc4753f1dcef53093181204af653b36fcd203fb8d42cf94960ad5fc924d2" protocol=ttrpc version=3
May 27 17:49:35.861802 containerd[1593]: time="2025-05-27T17:49:35.861596618Z" level=info msg="CreateContainer within sandbox \"7e74ee6aa1ef4d3acaaf8cdcbcf5610cb0d4417513aa098463ddb633fbbb52d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fcb1d8c1e6779a2605b1c93c5c64953bc4a649ca818d3ba33b6e0f391ce98db1\""
May 27 17:49:35.862023 sshd-session[3959]: pam_unix(sshd:session): session closed for user core
May 27 17:49:35.862132 containerd[1593]: time="2025-05-27T17:49:35.862108742Z" level=info msg="StartContainer for \"fcb1d8c1e6779a2605b1c93c5c64953bc4a649ca818d3ba33b6e0f391ce98db1\""
May 27 17:49:35.863260 containerd[1593]: time="2025-05-27T17:49:35.863229008Z" level=info msg="connecting to shim fcb1d8c1e6779a2605b1c93c5c64953bc4a649ca818d3ba33b6e0f391ce98db1" address="unix:///run/containerd/s/57b83ac050ff19fb9812e13195cd689b84335b4767fa5180a67d56cd625a5ae5" protocol=ttrpc version=3
May 27 17:49:35.867128 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:50052.service: Deactivated successfully.
May 27 17:49:35.870243 systemd[1]: session-8.scope: Deactivated successfully.
May 27 17:49:35.873731 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit.
May 27 17:49:35.877532 systemd-logind[1567]: Removed session 8.
May 27 17:49:35.884947 systemd[1]: Started cri-containerd-fcb1d8c1e6779a2605b1c93c5c64953bc4a649ca818d3ba33b6e0f391ce98db1.scope - libcontainer container fcb1d8c1e6779a2605b1c93c5c64953bc4a649ca818d3ba33b6e0f391ce98db1.
May 27 17:49:35.893579 systemd[1]: Started cri-containerd-15052d94681e7751709766acfb34d502d934239f1e76dbda87cd6f41e5fd3bde.scope - libcontainer container 15052d94681e7751709766acfb34d502d934239f1e76dbda87cd6f41e5fd3bde.
May 27 17:49:35.933524 containerd[1593]: time="2025-05-27T17:49:35.933384511Z" level=info msg="StartContainer for \"15052d94681e7751709766acfb34d502d934239f1e76dbda87cd6f41e5fd3bde\" returns successfully"
May 27 17:49:35.945708 containerd[1593]: time="2025-05-27T17:49:35.945660581Z" level=info msg="StartContainer for \"fcb1d8c1e6779a2605b1c93c5c64953bc4a649ca818d3ba33b6e0f391ce98db1\" returns successfully"
May 27 17:49:36.677530 kubelet[2732]: E0527 17:49:36.677229 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:36.680758 kubelet[2732]: E0527 17:49:36.680480 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:36.984332 kubelet[2732]: I0527 17:49:36.984057 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tmv97" podStartSLOduration=27.984031667 podStartE2EDuration="27.984031667s" podCreationTimestamp="2025-05-27 17:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:36.984030574 +0000 UTC m=+31.521232411" watchObservedRunningTime="2025-05-27 17:49:36.984031667 +0000 UTC m=+31.521233493"
May 27 17:49:36.984332 kubelet[2732]: I0527 17:49:36.984175 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tq6nn" podStartSLOduration=27.984170358 podStartE2EDuration="27.984170358s" podCreationTimestamp="2025-05-27 17:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:36.940930008 +0000 UTC m=+31.478131824" watchObservedRunningTime="2025-05-27 17:49:36.984170358 +0000 UTC m=+31.521372174"
May 27 17:49:37.682041 kubelet[2732]: E0527 17:49:37.682001 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:37.682041 kubelet[2732]: E0527 17:49:37.682001 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:38.684097 kubelet[2732]: E0527 17:49:38.684064 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:38.684902 kubelet[2732]: E0527 17:49:38.684856 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:49:40.875140 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:50056.service - OpenSSH per-connection server daemon (10.0.0.1:50056).
May 27 17:49:40.936663 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 50056 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:49:40.938565 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:40.995513 systemd-logind[1567]: New session 9 of user core.
May 27 17:49:41.007590 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 17:49:41.146644 sshd[4093]: Connection closed by 10.0.0.1 port 50056
May 27 17:49:41.146937 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
May 27 17:49:41.152317 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:50056.service: Deactivated successfully.
May 27 17:49:41.154538 systemd[1]: session-9.scope: Deactivated successfully.
May 27 17:49:41.155592 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit.
May 27 17:49:41.157545 systemd-logind[1567]: Removed session 9.
May 27 17:49:46.163789 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:59532.service - OpenSSH per-connection server daemon (10.0.0.1:59532).
May 27 17:49:46.260196 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 59532 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:49:46.262015 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:46.267847 systemd-logind[1567]: New session 10 of user core.
May 27 17:49:46.277513 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 17:49:46.402102 sshd[4114]: Connection closed by 10.0.0.1 port 59532
May 27 17:49:46.402501 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
May 27 17:49:46.408009 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:59532.service: Deactivated successfully.
May 27 17:49:46.410608 systemd[1]: session-10.scope: Deactivated successfully.
May 27 17:49:46.411447 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit.
May 27 17:49:46.413246 systemd-logind[1567]: Removed session 10.
May 27 17:49:51.423816 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:59534.service - OpenSSH per-connection server daemon (10.0.0.1:59534).
May 27 17:49:51.481047 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 59534 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:49:51.482594 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:51.486704 systemd-logind[1567]: New session 11 of user core.
May 27 17:49:51.493533 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 17:49:51.613970 sshd[4131]: Connection closed by 10.0.0.1 port 59534
May 27 17:49:51.614319 sshd-session[4129]: pam_unix(sshd:session): session closed for user core
May 27 17:49:51.625439 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:59534.service: Deactivated successfully.
May 27 17:49:51.627273 systemd[1]: session-11.scope: Deactivated successfully.
May 27 17:49:51.628255 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit.
May 27 17:49:51.630986 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:59542.service - OpenSSH per-connection server daemon (10.0.0.1:59542).
May 27 17:49:51.631655 systemd-logind[1567]: Removed session 11.
May 27 17:49:51.688209 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 59542 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:49:51.690019 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:51.694536 systemd-logind[1567]: New session 12 of user core.
May 27 17:49:51.706546 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 17:49:51.862999 sshd[4147]: Connection closed by 10.0.0.1 port 59542
May 27 17:49:51.865082 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
May 27 17:49:51.877045 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:59542.service: Deactivated successfully.
May 27 17:49:51.880011 systemd[1]: session-12.scope: Deactivated successfully.
May 27 17:49:51.882338 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit.
May 27 17:49:51.887986 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:59556.service - OpenSSH per-connection server daemon (10.0.0.1:59556).
May 27 17:49:51.888777 systemd-logind[1567]: Removed session 12.
May 27 17:49:51.950948 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 59556 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:49:51.952396 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:51.957436 systemd-logind[1567]: New session 13 of user core.
May 27 17:49:51.971532 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 17:49:52.103904 sshd[4161]: Connection closed by 10.0.0.1 port 59556
May 27 17:49:52.104205 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
May 27 17:49:52.108929 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:59556.service: Deactivated successfully.
May 27 17:49:52.110967 systemd[1]: session-13.scope: Deactivated successfully.
May 27 17:49:52.111863 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit.
May 27 17:49:52.113248 systemd-logind[1567]: Removed session 13.
May 27 17:49:57.125247 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:50302.service - OpenSSH per-connection server daemon (10.0.0.1:50302).
May 27 17:49:57.182249 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 50302 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:49:57.183967 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:57.189664 systemd-logind[1567]: New session 14 of user core.
May 27 17:49:57.196529 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 17:49:57.326230 sshd[4181]: Connection closed by 10.0.0.1 port 50302
May 27 17:49:57.326606 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
May 27 17:49:57.330783 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:50302.service: Deactivated successfully.
May 27 17:49:57.332763 systemd[1]: session-14.scope: Deactivated successfully.
May 27 17:49:57.333653 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit.
May 27 17:49:57.335155 systemd-logind[1567]: Removed session 14.
May 27 17:50:02.338447 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:50304.service - OpenSSH per-connection server daemon (10.0.0.1:50304).
May 27 17:50:02.390442 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 50304 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:02.392100 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:02.396592 systemd-logind[1567]: New session 15 of user core.
May 27 17:50:02.405573 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 17:50:02.522251 sshd[4196]: Connection closed by 10.0.0.1 port 50304
May 27 17:50:02.522747 sshd-session[4194]: pam_unix(sshd:session): session closed for user core
May 27 17:50:02.530972 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:50304.service: Deactivated successfully.
May 27 17:50:02.532821 systemd[1]: session-15.scope: Deactivated successfully.
May 27 17:50:02.533761 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit.
May 27 17:50:02.537225 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:50308.service - OpenSSH per-connection server daemon (10.0.0.1:50308).
May 27 17:50:02.538233 systemd-logind[1567]: Removed session 15.
May 27 17:50:02.590581 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 50308 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:02.591938 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:02.596466 systemd-logind[1567]: New session 16 of user core.
May 27 17:50:02.608496 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 17:50:02.857196 sshd[4212]: Connection closed by 10.0.0.1 port 50308
May 27 17:50:02.857652 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
May 27 17:50:02.867437 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:50308.service: Deactivated successfully.
May 27 17:50:02.869555 systemd[1]: session-16.scope: Deactivated successfully.
May 27 17:50:02.870481 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit.
May 27 17:50:02.874289 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:50312.service - OpenSSH per-connection server daemon (10.0.0.1:50312).
May 27 17:50:02.875055 systemd-logind[1567]: Removed session 16.
May 27 17:50:02.943234 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 50312 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:02.944855 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:02.949626 systemd-logind[1567]: New session 17 of user core.
May 27 17:50:02.959483 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 17:50:03.720548 sshd[4226]: Connection closed by 10.0.0.1 port 50312
May 27 17:50:03.721920 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
May 27 17:50:03.732719 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:50312.service: Deactivated successfully.
May 27 17:50:03.735337 systemd[1]: session-17.scope: Deactivated successfully.
May 27 17:50:03.737245 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit.
May 27 17:50:03.742616 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:58958.service - OpenSSH per-connection server daemon (10.0.0.1:58958).
May 27 17:50:03.746170 systemd-logind[1567]: Removed session 17.
May 27 17:50:03.792750 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 58958 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:03.794262 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:03.800390 systemd-logind[1567]: New session 18 of user core.
May 27 17:50:03.808500 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 17:50:04.035353 sshd[4248]: Connection closed by 10.0.0.1 port 58958
May 27 17:50:04.036255 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
May 27 17:50:04.049365 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:58958.service: Deactivated successfully.
May 27 17:50:04.051359 systemd[1]: session-18.scope: Deactivated successfully.
May 27 17:50:04.053540 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit.
May 27 17:50:04.055719 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:58964.service - OpenSSH per-connection server daemon (10.0.0.1:58964).
May 27 17:50:04.056898 systemd-logind[1567]: Removed session 18.
May 27 17:50:04.108215 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 58964 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:04.109851 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:04.114880 systemd-logind[1567]: New session 19 of user core.
May 27 17:50:04.129519 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:50:04.240959 sshd[4261]: Connection closed by 10.0.0.1 port 58964
May 27 17:50:04.241451 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
May 27 17:50:04.246204 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:58964.service: Deactivated successfully.
May 27 17:50:04.249526 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:50:04.250469 systemd-logind[1567]: Session 19 logged out. Waiting for processes to exit.
May 27 17:50:04.252142 systemd-logind[1567]: Removed session 19.
May 27 17:50:09.254612 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:58972.service - OpenSSH per-connection server daemon (10.0.0.1:58972).
May 27 17:50:09.301821 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 58972 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:09.303585 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:09.308302 systemd-logind[1567]: New session 20 of user core.
May 27 17:50:09.313564 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:50:09.423097 sshd[4281]: Connection closed by 10.0.0.1 port 58972
May 27 17:50:09.423528 sshd-session[4277]: pam_unix(sshd:session): session closed for user core
May 27 17:50:09.427895 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:58972.service: Deactivated successfully.
May 27 17:50:09.429956 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:50:09.431008 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit.
May 27 17:50:09.432553 systemd-logind[1567]: Removed session 20.
May 27 17:50:14.437153 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:41178.service - OpenSSH per-connection server daemon (10.0.0.1:41178).
May 27 17:50:14.494005 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 41178 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:14.496057 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:14.501885 systemd-logind[1567]: New session 21 of user core.
May 27 17:50:14.511585 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 17:50:14.565121 kubelet[2732]: E0527 17:50:14.565049 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:14.632299 sshd[4298]: Connection closed by 10.0.0.1 port 41178
May 27 17:50:14.632598 sshd-session[4296]: pam_unix(sshd:session): session closed for user core
May 27 17:50:14.636582 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:41178.service: Deactivated successfully.
May 27 17:50:14.638808 systemd[1]: session-21.scope: Deactivated successfully.
May 27 17:50:14.641895 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit.
May 27 17:50:14.643110 systemd-logind[1567]: Removed session 21.
May 27 17:50:18.564799 kubelet[2732]: E0527 17:50:18.564758 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:19.651201 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:41188.service - OpenSSH per-connection server daemon (10.0.0.1:41188).
May 27 17:50:19.718482 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 41188 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:19.720350 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:19.725274 systemd-logind[1567]: New session 22 of user core.
May 27 17:50:19.736657 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:50:19.852862 sshd[4313]: Connection closed by 10.0.0.1 port 41188
May 27 17:50:19.853209 sshd-session[4311]: pam_unix(sshd:session): session closed for user core
May 27 17:50:19.857796 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:41188.service: Deactivated successfully.
May 27 17:50:19.859957 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:50:19.860821 systemd-logind[1567]: Session 22 logged out. Waiting for processes to exit.
May 27 17:50:19.861995 systemd-logind[1567]: Removed session 22.
May 27 17:50:24.564953 kubelet[2732]: E0527 17:50:24.564884 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:24.871731 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:55468.service - OpenSSH per-connection server daemon (10.0.0.1:55468).
May 27 17:50:24.933916 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 55468 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:24.935830 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:24.942196 systemd-logind[1567]: New session 23 of user core.
May 27 17:50:24.948509 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:50:25.063209 sshd[4328]: Connection closed by 10.0.0.1 port 55468
May 27 17:50:25.063622 sshd-session[4326]: pam_unix(sshd:session): session closed for user core
May 27 17:50:25.072852 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:55468.service: Deactivated successfully.
May 27 17:50:25.075508 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:50:25.076748 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit.
May 27 17:50:25.080806 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:55478.service - OpenSSH per-connection server daemon (10.0.0.1:55478).
May 27 17:50:25.081542 systemd-logind[1567]: Removed session 23.
May 27 17:50:25.138177 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 55478 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:25.139699 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:25.144325 systemd-logind[1567]: New session 24 of user core.
May 27 17:50:25.152506 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 17:50:26.943723 containerd[1593]: time="2025-05-27T17:50:26.943655685Z" level=info msg="StopContainer for \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" with timeout 30 (s)"
May 27 17:50:26.957355 containerd[1593]: time="2025-05-27T17:50:26.957307322Z" level=info msg="Stop container \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" with signal terminated"
May 27 17:50:26.972569 systemd[1]: cri-containerd-bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218.scope: Deactivated successfully.
May 27 17:50:26.973788 containerd[1593]: time="2025-05-27T17:50:26.973731566Z" level=info msg="received exit event container_id:\"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" id:\"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" pid:3299 exited_at:{seconds:1748368226 nanos:973053853}"
May 27 17:50:26.974186 containerd[1593]: time="2025-05-27T17:50:26.974149483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" id:\"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" pid:3299 exited_at:{seconds:1748368226 nanos:973053853}"
May 27 17:50:26.990770 containerd[1593]: time="2025-05-27T17:50:26.990694217Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 17:50:26.992693 containerd[1593]: time="2025-05-27T17:50:26.992621554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" id:\"8ff2cdddb9ecb11aeb371d190d9ea9b8a2db97c29099a1cab55e8db2d6e6889f\" pid:4368 exited_at:{seconds:1748368226 nanos:991178672}"
May 27 17:50:26.996312 containerd[1593]: time="2025-05-27T17:50:26.996263930Z" level=info msg="StopContainer for \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" with timeout 2 (s)"
May 27 17:50:26.996679 containerd[1593]: time="2025-05-27T17:50:26.996651950Z" level=info msg="Stop container \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" with signal terminated"
May 27 17:50:27.003340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218-rootfs.mount: Deactivated successfully.
May 27 17:50:27.007450 systemd-networkd[1492]: lxc_health: Link DOWN
May 27 17:50:27.007462 systemd-networkd[1492]: lxc_health: Lost carrier
May 27 17:50:27.026929 systemd[1]: cri-containerd-d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873.scope: Deactivated successfully.
May 27 17:50:27.027440 systemd[1]: cri-containerd-d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873.scope: Consumed 7.223s CPU time, 126M memory peak, 224K read from disk, 13.3M written to disk.
May 27 17:50:27.028683 containerd[1593]: time="2025-05-27T17:50:27.028645326Z" level=info msg="received exit event container_id:\"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" id:\"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" pid:3392 exited_at:{seconds:1748368227 nanos:28326458}"
May 27 17:50:27.028880 containerd[1593]: time="2025-05-27T17:50:27.028786986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" id:\"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" pid:3392 exited_at:{seconds:1748368227 nanos:28326458}"
May 27 17:50:27.030350 containerd[1593]: time="2025-05-27T17:50:27.030325369Z" level=info msg="StopContainer for \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" returns successfully"
May 27 17:50:27.033225 containerd[1593]: time="2025-05-27T17:50:27.033172417Z" level=info msg="StopPodSandbox for \"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\""
May 27 17:50:27.033338 containerd[1593]: time="2025-05-27T17:50:27.033311201Z" level=info msg="Container to stop \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:50:27.041867 systemd[1]: cri-containerd-6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879.scope: Deactivated successfully.
May 27 17:50:27.045425 containerd[1593]: time="2025-05-27T17:50:27.045353699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\" id:\"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\" pid:2876 exit_status:137 exited_at:{seconds:1748368227 nanos:44849488}"
May 27 17:50:27.057614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873-rootfs.mount: Deactivated successfully.
May 27 17:50:27.081466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879-rootfs.mount: Deactivated successfully.
May 27 17:50:27.113290 containerd[1593]: time="2025-05-27T17:50:27.113213287Z" level=info msg="shim disconnected" id=6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879 namespace=k8s.io
May 27 17:50:27.113290 containerd[1593]: time="2025-05-27T17:50:27.113269774Z" level=warning msg="cleaning up after shim disconnected" id=6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879 namespace=k8s.io
May 27 17:50:27.125418 containerd[1593]: time="2025-05-27T17:50:27.113278982Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 17:50:27.125592 containerd[1593]: time="2025-05-27T17:50:27.114102913Z" level=info msg="StopContainer for \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" returns successfully"
May 27 17:50:27.126211 containerd[1593]: time="2025-05-27T17:50:27.126174997Z" level=info msg="StopPodSandbox for \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\""
May 27 17:50:27.126336 containerd[1593]: time="2025-05-27T17:50:27.126280368Z" level=info msg="Container to stop \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:50:27.126336 containerd[1593]: time="2025-05-27T17:50:27.126297331Z" level=info msg="Container to stop \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:50:27.126336 containerd[1593]: time="2025-05-27T17:50:27.126306328Z" level=info msg="Container to stop \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:50:27.126336 containerd[1593]: time="2025-05-27T17:50:27.126314813Z" level=info msg="Container to stop \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:50:27.126336 containerd[1593]: time="2025-05-27T17:50:27.126323360Z" level=info msg="Container to stop \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:50:27.135350 systemd[1]: cri-containerd-f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31.scope: Deactivated successfully.
May 27 17:50:27.154197 containerd[1593]: time="2025-05-27T17:50:27.154133648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" id:\"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" pid:2888 exit_status:137 exited_at:{seconds:1748368227 nanos:136282546}"
May 27 17:50:27.157352 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879-shm.mount: Deactivated successfully.
May 27 17:50:27.166615 containerd[1593]: time="2025-05-27T17:50:27.166549488Z" level=info msg="received exit event sandbox_id:\"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\" exit_status:137 exited_at:{seconds:1748368227 nanos:44849488}"
May 27 17:50:27.167232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31-rootfs.mount: Deactivated successfully.
May 27 17:50:27.168568 containerd[1593]: time="2025-05-27T17:50:27.168520785Z" level=info msg="TearDown network for sandbox \"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\" successfully"
May 27 17:50:27.168568 containerd[1593]: time="2025-05-27T17:50:27.168564680Z" level=info msg="StopPodSandbox for \"6ee2649466fdbbd59bbf429644ccf3301b300e731c9e454f7fca05279a55b879\" returns successfully"
May 27 17:50:27.173454 containerd[1593]: time="2025-05-27T17:50:27.173346685Z" level=info msg="received exit event sandbox_id:\"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" exit_status:137 exited_at:{seconds:1748368227 nanos:136282546}"
May 27 17:50:27.173696 containerd[1593]: time="2025-05-27T17:50:27.173650054Z" level=info msg="TearDown network for sandbox \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" successfully"
May 27 17:50:27.173696 containerd[1593]: time="2025-05-27T17:50:27.173683778Z" level=info msg="StopPodSandbox for \"f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31\" returns successfully"
May 27 17:50:27.173850 containerd[1593]: time="2025-05-27T17:50:27.173780603Z" level=info msg="shim disconnected" id=f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31 namespace=k8s.io
May 27 17:50:27.173850 containerd[1593]: time="2025-05-27T17:50:27.173801463Z" level=warning msg="cleaning up after shim disconnected" id=f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31 namespace=k8s.io
May 27 17:50:27.173850 containerd[1593]: time="2025-05-27T17:50:27.173810480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 17:50:27.324319 kubelet[2732]: I0527 17:50:27.324162 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-bpf-maps\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") "
May 27 17:50:27.324319 kubelet[2732]: I0527 17:50:27.324232 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-hostproc\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") "
May 27 17:50:27.324319 kubelet[2732]: I0527 17:50:27.324254 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-xtables-lock\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") "
May 27 17:50:27.324319 kubelet[2732]: I0527 17:50:27.324302 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-host-proc-sys-net\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") "
May 27 17:50:27.324319 kubelet[2732]: I0527 17:50:27.324364 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b83a14-62a9-48f1-9533-d2b3d129997a-clustermesh-secrets\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") "
May 27 17:50:27.325273 kubelet[2732]: I0527 17:50:27.324408 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mt78h\" (UniqueName: \"kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-kube-api-access-mt78h\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " May 27 17:50:27.325273 kubelet[2732]: I0527 17:50:27.324428 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-lib-modules\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " May 27 17:50:27.325273 kubelet[2732]: I0527 17:50:27.324448 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61d22639-9b6c-486e-8d11-4a1dd447a759-cilium-config-path\") pod \"61d22639-9b6c-486e-8d11-4a1dd447a759\" (UID: \"61d22639-9b6c-486e-8d11-4a1dd447a759\") " May 27 17:50:27.325273 kubelet[2732]: I0527 17:50:27.324469 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-host-proc-sys-kernel\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " May 27 17:50:27.325273 kubelet[2732]: I0527 17:50:27.324492 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8r2n\" (UniqueName: \"kubernetes.io/projected/61d22639-9b6c-486e-8d11-4a1dd447a759-kube-api-access-v8r2n\") pod \"61d22639-9b6c-486e-8d11-4a1dd447a759\" (UID: \"61d22639-9b6c-486e-8d11-4a1dd447a759\") " May 27 17:50:27.325273 kubelet[2732]: I0527 17:50:27.324514 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-run\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " 
May 27 17:50:27.325616 kubelet[2732]: I0527 17:50:27.324530 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-config-path\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " May 27 17:50:27.325616 kubelet[2732]: I0527 17:50:27.324545 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-hubble-tls\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " May 27 17:50:27.325616 kubelet[2732]: I0527 17:50:27.324568 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-cgroup\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " May 27 17:50:27.325616 kubelet[2732]: I0527 17:50:27.324582 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cni-path\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " May 27 17:50:27.325616 kubelet[2732]: I0527 17:50:27.324607 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-etc-cni-netd\") pod \"f4b83a14-62a9-48f1-9533-d2b3d129997a\" (UID: \"f4b83a14-62a9-48f1-9533-d2b3d129997a\") " May 27 17:50:27.325616 kubelet[2732]: I0527 17:50:27.324349 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod 
"f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.325873 kubelet[2732]: I0527 17:50:27.324403 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.325873 kubelet[2732]: I0527 17:50:27.324669 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.325873 kubelet[2732]: I0527 17:50:27.324714 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-hostproc" (OuterVolumeSpecName: "hostproc") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.325873 kubelet[2732]: I0527 17:50:27.324788 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.325873 kubelet[2732]: I0527 17:50:27.324812 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.326113 kubelet[2732]: I0527 17:50:27.324832 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.326113 kubelet[2732]: I0527 17:50:27.325317 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.326600 kubelet[2732]: I0527 17:50:27.326542 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.326749 kubelet[2732]: I0527 17:50:27.326725 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cni-path" (OuterVolumeSpecName: "cni-path") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:50:27.330763 kubelet[2732]: I0527 17:50:27.330701 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:50:27.332090 kubelet[2732]: I0527 17:50:27.332033 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-kube-api-access-mt78h" (OuterVolumeSpecName: "kube-api-access-mt78h") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "kube-api-access-mt78h". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:50:27.332784 kubelet[2732]: I0527 17:50:27.332711 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d22639-9b6c-486e-8d11-4a1dd447a759-kube-api-access-v8r2n" (OuterVolumeSpecName: "kube-api-access-v8r2n") pod "61d22639-9b6c-486e-8d11-4a1dd447a759" (UID: "61d22639-9b6c-486e-8d11-4a1dd447a759"). InnerVolumeSpecName "kube-api-access-v8r2n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:50:27.332905 kubelet[2732]: I0527 17:50:27.332765 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b83a14-62a9-48f1-9533-d2b3d129997a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:50:27.333120 kubelet[2732]: I0527 17:50:27.333060 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d22639-9b6c-486e-8d11-4a1dd447a759-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61d22639-9b6c-486e-8d11-4a1dd447a759" (UID: "61d22639-9b6c-486e-8d11-4a1dd447a759"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:50:27.334501 kubelet[2732]: I0527 17:50:27.334451 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f4b83a14-62a9-48f1-9533-d2b3d129997a" (UID: "f4b83a14-62a9-48f1-9533-d2b3d129997a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:50:27.425195 kubelet[2732]: I0527 17:50:27.425108 2732 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425195 kubelet[2732]: I0527 17:50:27.425162 2732 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b83a14-62a9-48f1-9533-d2b3d129997a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425195 kubelet[2732]: I0527 17:50:27.425176 2732 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mt78h\" (UniqueName: \"kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-kube-api-access-mt78h\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425195 kubelet[2732]: I0527 17:50:27.425188 2732 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-lib-modules\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425195 kubelet[2732]: I0527 17:50:27.425201 2732 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61d22639-9b6c-486e-8d11-4a1dd447a759-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425195 kubelet[2732]: I0527 17:50:27.425211 2732 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425195 kubelet[2732]: I0527 17:50:27.425222 2732 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v8r2n\" (UniqueName: \"kubernetes.io/projected/61d22639-9b6c-486e-8d11-4a1dd447a759-kube-api-access-v8r2n\") on node \"localhost\" 
DevicePath \"\"" May 27 17:50:27.425195 kubelet[2732]: I0527 17:50:27.425234 2732 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-run\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425698 kubelet[2732]: I0527 17:50:27.425245 2732 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425698 kubelet[2732]: I0527 17:50:27.425256 2732 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b83a14-62a9-48f1-9533-d2b3d129997a-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425698 kubelet[2732]: I0527 17:50:27.425267 2732 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cni-path\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425698 kubelet[2732]: I0527 17:50:27.425277 2732 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425698 kubelet[2732]: I0527 17:50:27.425287 2732 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425698 kubelet[2732]: I0527 17:50:27.425299 2732 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425698 kubelet[2732]: I0527 17:50:27.425310 2732 reconciler_common.go:299] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-hostproc\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.425698 kubelet[2732]: I0527 17:50:27.425319 2732 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b83a14-62a9-48f1-9533-d2b3d129997a-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 27 17:50:27.574090 systemd[1]: Removed slice kubepods-burstable-podf4b83a14_62a9_48f1_9533_d2b3d129997a.slice - libcontainer container kubepods-burstable-podf4b83a14_62a9_48f1_9533_d2b3d129997a.slice. May 27 17:50:27.574218 systemd[1]: kubepods-burstable-podf4b83a14_62a9_48f1_9533_d2b3d129997a.slice: Consumed 7.341s CPU time, 126.3M memory peak, 232K read from disk, 13.3M written to disk. May 27 17:50:27.576361 systemd[1]: Removed slice kubepods-besteffort-pod61d22639_9b6c_486e_8d11_4a1dd447a759.slice - libcontainer container kubepods-besteffort-pod61d22639_9b6c_486e_8d11_4a1dd447a759.slice. 
May 27 17:50:27.791720 kubelet[2732]: I0527 17:50:27.791677 2732 scope.go:117] "RemoveContainer" containerID="bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218" May 27 17:50:27.793381 containerd[1593]: time="2025-05-27T17:50:27.793337723Z" level=info msg="RemoveContainer for \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\"" May 27 17:50:27.804207 containerd[1593]: time="2025-05-27T17:50:27.804151718Z" level=info msg="RemoveContainer for \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" returns successfully" May 27 17:50:27.804601 kubelet[2732]: I0527 17:50:27.804490 2732 scope.go:117] "RemoveContainer" containerID="bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218" May 27 17:50:27.804902 containerd[1593]: time="2025-05-27T17:50:27.804851021Z" level=error msg="ContainerStatus for \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\": not found" May 27 17:50:27.808567 kubelet[2732]: E0527 17:50:27.808535 2732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\": not found" containerID="bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218" May 27 17:50:27.808660 kubelet[2732]: I0527 17:50:27.808571 2732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218"} err="failed to get container status \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd6f25332348ad85e9ad46f06a1b8f62d41e2508c6987724f709f552f06d6218\": not found" May 27 17:50:27.808660 
kubelet[2732]: I0527 17:50:27.808633 2732 scope.go:117] "RemoveContainer" containerID="d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873" May 27 17:50:27.810471 containerd[1593]: time="2025-05-27T17:50:27.810439325Z" level=info msg="RemoveContainer for \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\"" May 27 17:50:27.816845 containerd[1593]: time="2025-05-27T17:50:27.816660274Z" level=info msg="RemoveContainer for \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" returns successfully" May 27 17:50:27.817016 kubelet[2732]: I0527 17:50:27.816968 2732 scope.go:117] "RemoveContainer" containerID="99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe" May 27 17:50:27.820149 containerd[1593]: time="2025-05-27T17:50:27.820097437Z" level=info msg="RemoveContainer for \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\"" May 27 17:50:27.828057 containerd[1593]: time="2025-05-27T17:50:27.827908057Z" level=info msg="RemoveContainer for \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\" returns successfully" May 27 17:50:27.828284 kubelet[2732]: I0527 17:50:27.828242 2732 scope.go:117] "RemoveContainer" containerID="daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d" May 27 17:50:27.832053 containerd[1593]: time="2025-05-27T17:50:27.831989628Z" level=info msg="RemoveContainer for \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\"" May 27 17:50:27.842996 containerd[1593]: time="2025-05-27T17:50:27.842938662Z" level=info msg="RemoveContainer for \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\" returns successfully" May 27 17:50:27.843310 kubelet[2732]: I0527 17:50:27.843251 2732 scope.go:117] "RemoveContainer" containerID="83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c" May 27 17:50:27.845277 containerd[1593]: time="2025-05-27T17:50:27.845233747Z" level=info msg="RemoveContainer for 
\"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\"" May 27 17:50:27.850985 containerd[1593]: time="2025-05-27T17:50:27.850894549Z" level=info msg="RemoveContainer for \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\" returns successfully" May 27 17:50:27.851290 kubelet[2732]: I0527 17:50:27.851253 2732 scope.go:117] "RemoveContainer" containerID="324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c" May 27 17:50:27.853281 containerd[1593]: time="2025-05-27T17:50:27.853241784Z" level=info msg="RemoveContainer for \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\"" May 27 17:50:27.858563 containerd[1593]: time="2025-05-27T17:50:27.858503715Z" level=info msg="RemoveContainer for \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\" returns successfully" May 27 17:50:27.858804 kubelet[2732]: I0527 17:50:27.858767 2732 scope.go:117] "RemoveContainer" containerID="d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873" May 27 17:50:27.859122 containerd[1593]: time="2025-05-27T17:50:27.859065175Z" level=error msg="ContainerStatus for \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\": not found" May 27 17:50:27.859293 kubelet[2732]: E0527 17:50:27.859263 2732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\": not found" containerID="d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873" May 27 17:50:27.859354 kubelet[2732]: I0527 17:50:27.859305 2732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873"} err="failed to get 
container status \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\": rpc error: code = NotFound desc = an error occurred when try to find container \"d68be681314295fe3f688e90b641b8bf6f0ea67e00e81499b6a58209b6385873\": not found" May 27 17:50:27.859354 kubelet[2732]: I0527 17:50:27.859341 2732 scope.go:117] "RemoveContainer" containerID="99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe" May 27 17:50:27.859673 containerd[1593]: time="2025-05-27T17:50:27.859629922Z" level=error msg="ContainerStatus for \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\": not found" May 27 17:50:27.859806 kubelet[2732]: E0527 17:50:27.859781 2732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\": not found" containerID="99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe" May 27 17:50:27.859871 kubelet[2732]: I0527 17:50:27.859805 2732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe"} err="failed to get container status \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\": rpc error: code = NotFound desc = an error occurred when try to find container \"99abce249c4e6c7f69547b0360c5ac6574dfe3db987b3b2f48e155f762edcafe\": not found" May 27 17:50:27.859871 kubelet[2732]: I0527 17:50:27.859830 2732 scope.go:117] "RemoveContainer" containerID="daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d" May 27 17:50:27.860038 containerd[1593]: time="2025-05-27T17:50:27.859998705Z" level=error msg="ContainerStatus for 
\"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\": not found" May 27 17:50:27.860160 kubelet[2732]: E0527 17:50:27.860137 2732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\": not found" containerID="daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d" May 27 17:50:27.860210 kubelet[2732]: I0527 17:50:27.860158 2732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d"} err="failed to get container status \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"daf35a2108dda15ed46b235d2649552bbebbba327b2b5a7ed833d4e566ea3c2d\": not found" May 27 17:50:27.860210 kubelet[2732]: I0527 17:50:27.860173 2732 scope.go:117] "RemoveContainer" containerID="83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c" May 27 17:50:27.860382 containerd[1593]: time="2025-05-27T17:50:27.860332893Z" level=error msg="ContainerStatus for \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\": not found" May 27 17:50:27.860536 kubelet[2732]: E0527 17:50:27.860501 2732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\": not found" 
containerID="83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c" May 27 17:50:27.860536 kubelet[2732]: I0527 17:50:27.860531 2732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c"} err="failed to get container status \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\": rpc error: code = NotFound desc = an error occurred when try to find container \"83ffa16efca6acead98fa39beb3fb5a456421fe790c00545b15209086bd4a59c\": not found" May 27 17:50:27.860625 kubelet[2732]: I0527 17:50:27.860544 2732 scope.go:117] "RemoveContainer" containerID="324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c" May 27 17:50:27.860751 containerd[1593]: time="2025-05-27T17:50:27.860712106Z" level=error msg="ContainerStatus for \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\": not found" May 27 17:50:27.860876 kubelet[2732]: E0527 17:50:27.860849 2732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\": not found" containerID="324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c" May 27 17:50:27.860941 kubelet[2732]: I0527 17:50:27.860879 2732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c"} err="failed to get container status \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"324dcfb9cd4bc26cfc8c5f76292be0508018512d282df1316d2573c9a31a9d1c\": not found" May 27 
17:50:28.003426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2ed15a906151330320e2b5f1c51c7e9714ab6cf3998f4f6e1c2e374bbd55b31-shm.mount: Deactivated successfully. May 27 17:50:28.003583 systemd[1]: var-lib-kubelet-pods-f4b83a14\x2d62a9\x2d48f1\x2d9533\x2dd2b3d129997a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmt78h.mount: Deactivated successfully. May 27 17:50:28.003689 systemd[1]: var-lib-kubelet-pods-61d22639\x2d9b6c\x2d486e\x2d8d11\x2d4a1dd447a759-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8r2n.mount: Deactivated successfully. May 27 17:50:28.003791 systemd[1]: var-lib-kubelet-pods-f4b83a14\x2d62a9\x2d48f1\x2d9533\x2dd2b3d129997a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 17:50:28.003888 systemd[1]: var-lib-kubelet-pods-f4b83a14\x2d62a9\x2d48f1\x2d9533\x2dd2b3d129997a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 17:50:28.564431 kubelet[2732]: E0527 17:50:28.564389 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:50:28.791839 sshd[4343]: Connection closed by 10.0.0.1 port 55478 May 27 17:50:28.792444 sshd-session[4341]: pam_unix(sshd:session): session closed for user core May 27 17:50:28.805556 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:55478.service: Deactivated successfully. May 27 17:50:28.807884 systemd[1]: session-24.scope: Deactivated successfully. May 27 17:50:28.808773 systemd-logind[1567]: Session 24 logged out. Waiting for processes to exit. May 27 17:50:28.811818 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:55488.service - OpenSSH per-connection server daemon (10.0.0.1:55488). May 27 17:50:28.812755 systemd-logind[1567]: Removed session 24. 
May 27 17:50:28.875288 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 55488 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:28.877082 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:28.882795 systemd-logind[1567]: New session 25 of user core.
May 27 17:50:28.896718 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:50:29.481392 sshd[4499]: Connection closed by 10.0.0.1 port 55488
May 27 17:50:29.482025 sshd-session[4497]: pam_unix(sshd:session): session closed for user core
May 27 17:50:29.493239 kubelet[2732]: I0527 17:50:29.493182 2732 memory_manager.go:355] "RemoveStaleState removing state" podUID="61d22639-9b6c-486e-8d11-4a1dd447a759" containerName="cilium-operator"
May 27 17:50:29.493239 kubelet[2732]: I0527 17:50:29.493219 2732 memory_manager.go:355] "RemoveStaleState removing state" podUID="f4b83a14-62a9-48f1-9533-d2b3d129997a" containerName="cilium-agent"
May 27 17:50:29.496074 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:55488.service: Deactivated successfully.
May 27 17:50:29.502237 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:50:29.504708 systemd-logind[1567]: Session 25 logged out. Waiting for processes to exit.
May 27 17:50:29.513111 systemd[1]: Started sshd@25-10.0.0.132:22-10.0.0.1:55500.service - OpenSSH per-connection server daemon (10.0.0.1:55500).
May 27 17:50:29.516731 systemd-logind[1567]: Removed session 25.
May 27 17:50:29.522568 systemd[1]: Created slice kubepods-burstable-pod78cc1ab3_d2c7_4c3f_b085_8bedc9fffb13.slice - libcontainer container kubepods-burstable-pod78cc1ab3_d2c7_4c3f_b085_8bedc9fffb13.slice.
May 27 17:50:29.566824 kubelet[2732]: I0527 17:50:29.566788 2732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d22639-9b6c-486e-8d11-4a1dd447a759" path="/var/lib/kubelet/pods/61d22639-9b6c-486e-8d11-4a1dd447a759/volumes"
May 27 17:50:29.567329 kubelet[2732]: I0527 17:50:29.567310 2732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b83a14-62a9-48f1-9533-d2b3d129997a" path="/var/lib/kubelet/pods/f4b83a14-62a9-48f1-9533-d2b3d129997a/volumes"
May 27 17:50:29.572839 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 55500 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:29.574334 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:29.578780 systemd-logind[1567]: New session 26 of user core.
May 27 17:50:29.589514 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 17:50:29.640746 kubelet[2732]: I0527 17:50:29.640696 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-cilium-run\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.640746 kubelet[2732]: I0527 17:50:29.640740 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-cilium-cgroup\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641004 kubelet[2732]: I0527 17:50:29.640761 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-etc-cni-netd\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641004 kubelet[2732]: I0527 17:50:29.640778 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-cilium-config-path\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641004 kubelet[2732]: I0527 17:50:29.640796 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rlc\" (UniqueName: \"kubernetes.io/projected/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-kube-api-access-j8rlc\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641004 kubelet[2732]: I0527 17:50:29.640907 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-xtables-lock\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641004 kubelet[2732]: I0527 17:50:29.640963 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-bpf-maps\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641136 kubelet[2732]: I0527 17:50:29.641022 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-host-proc-sys-kernel\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641136 kubelet[2732]: I0527 17:50:29.641085 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-hostproc\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641136 kubelet[2732]: I0527 17:50:29.641111 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-cni-path\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641204 kubelet[2732]: I0527 17:50:29.641132 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-cilium-ipsec-secrets\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641204 kubelet[2732]: I0527 17:50:29.641156 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-clustermesh-secrets\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641204 kubelet[2732]: I0527 17:50:29.641175 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-host-proc-sys-net\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641204 kubelet[2732]: I0527 17:50:29.641194 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-hubble-tls\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.641287 kubelet[2732]: I0527 17:50:29.641226 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13-lib-modules\") pod \"cilium-pv9b5\" (UID: \"78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13\") " pod="kube-system/cilium-pv9b5"
May 27 17:50:29.642875 sshd[4513]: Connection closed by 10.0.0.1 port 55500
May 27 17:50:29.643391 sshd-session[4511]: pam_unix(sshd:session): session closed for user core
May 27 17:50:29.659232 systemd[1]: sshd@25-10.0.0.132:22-10.0.0.1:55500.service: Deactivated successfully.
May 27 17:50:29.661081 systemd[1]: session-26.scope: Deactivated successfully.
May 27 17:50:29.661927 systemd-logind[1567]: Session 26 logged out. Waiting for processes to exit.
May 27 17:50:29.664797 systemd[1]: Started sshd@26-10.0.0.132:22-10.0.0.1:55508.service - OpenSSH per-connection server daemon (10.0.0.1:55508).
May 27 17:50:29.665834 systemd-logind[1567]: Removed session 26.
May 27 17:50:29.717091 sshd[4520]: Accepted publickey for core from 10.0.0.1 port 55508 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:50:29.718615 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:29.723187 systemd-logind[1567]: New session 27 of user core.
May 27 17:50:29.735515 systemd[1]: Started session-27.scope - Session 27 of User core.
May 27 17:50:29.836541 kubelet[2732]: E0527 17:50:29.836502 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:29.837585 containerd[1593]: time="2025-05-27T17:50:29.837523981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pv9b5,Uid:78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13,Namespace:kube-system,Attempt:0,}"
May 27 17:50:29.857152 containerd[1593]: time="2025-05-27T17:50:29.857036240Z" level=info msg="connecting to shim 45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24" address="unix:///run/containerd/s/459e07630d0c6aea16eaba7b7097aafa1da976a03b3666d484cd1f2032b90d1b" namespace=k8s.io protocol=ttrpc version=3
May 27 17:50:29.881521 systemd[1]: Started cri-containerd-45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24.scope - libcontainer container 45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24.
May 27 17:50:29.909267 containerd[1593]: time="2025-05-27T17:50:29.909227728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pv9b5,Uid:78cc1ab3-d2c7-4c3f-b085-8bedc9fffb13,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\""
May 27 17:50:29.910312 kubelet[2732]: E0527 17:50:29.909956 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:29.913455 containerd[1593]: time="2025-05-27T17:50:29.913415935Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 17:50:29.923579 containerd[1593]: time="2025-05-27T17:50:29.923525323Z" level=info msg="Container 3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe: CDI devices from CRI Config.CDIDevices: []"
May 27 17:50:29.942521 containerd[1593]: time="2025-05-27T17:50:29.942469729Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe\""
May 27 17:50:29.942983 containerd[1593]: time="2025-05-27T17:50:29.942957609Z" level=info msg="StartContainer for \"3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe\""
May 27 17:50:29.944242 containerd[1593]: time="2025-05-27T17:50:29.944214683Z" level=info msg="connecting to shim 3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe" address="unix:///run/containerd/s/459e07630d0c6aea16eaba7b7097aafa1da976a03b3666d484cd1f2032b90d1b" protocol=ttrpc version=3
May 27 17:50:29.966527 systemd[1]: Started cri-containerd-3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe.scope - libcontainer container 3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe.
May 27 17:50:30.004342 containerd[1593]: time="2025-05-27T17:50:30.004171230Z" level=info msg="StartContainer for \"3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe\" returns successfully"
May 27 17:50:30.014165 systemd[1]: cri-containerd-3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe.scope: Deactivated successfully.
May 27 17:50:30.015541 containerd[1593]: time="2025-05-27T17:50:30.015468161Z" level=info msg="received exit event container_id:\"3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe\" id:\"3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe\" pid:4593 exited_at:{seconds:1748368230 nanos:15003096}"
May 27 17:50:30.015700 containerd[1593]: time="2025-05-27T17:50:30.015637955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe\" id:\"3f965420132c2ca421690ea9ac6a3c208be071de62daa40845549e0962427abe\" pid:4593 exited_at:{seconds:1748368230 nanos:15003096}"
May 27 17:50:30.634954 kubelet[2732]: E0527 17:50:30.634878 2732 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 17:50:30.806821 kubelet[2732]: E0527 17:50:30.806785 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:30.809394 containerd[1593]: time="2025-05-27T17:50:30.809190029Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 17:50:30.818393 containerd[1593]: time="2025-05-27T17:50:30.817540259Z" level=info msg="Container b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84: CDI devices from CRI Config.CDIDevices: []"
May 27 17:50:30.826919 containerd[1593]: time="2025-05-27T17:50:30.826872198Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84\""
May 27 17:50:30.827448 containerd[1593]: time="2025-05-27T17:50:30.827419140Z" level=info msg="StartContainer for \"b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84\""
May 27 17:50:30.828356 containerd[1593]: time="2025-05-27T17:50:30.828322460Z" level=info msg="connecting to shim b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84" address="unix:///run/containerd/s/459e07630d0c6aea16eaba7b7097aafa1da976a03b3666d484cd1f2032b90d1b" protocol=ttrpc version=3
May 27 17:50:30.848520 systemd[1]: Started cri-containerd-b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84.scope - libcontainer container b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84.
May 27 17:50:30.882233 containerd[1593]: time="2025-05-27T17:50:30.882170423Z" level=info msg="StartContainer for \"b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84\" returns successfully"
May 27 17:50:30.887765 systemd[1]: cri-containerd-b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84.scope: Deactivated successfully.
May 27 17:50:30.889274 containerd[1593]: time="2025-05-27T17:50:30.889233092Z" level=info msg="received exit event container_id:\"b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84\" id:\"b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84\" pid:4638 exited_at:{seconds:1748368230 nanos:888931878}"
May 27 17:50:30.889676 containerd[1593]: time="2025-05-27T17:50:30.889637122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84\" id:\"b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84\" pid:4638 exited_at:{seconds:1748368230 nanos:888931878}"
May 27 17:50:30.912051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b072f1bf9b330d59b9cd683d3576370bce356f791634837bcd7671fc6a20ab84-rootfs.mount: Deactivated successfully.
May 27 17:50:31.811160 kubelet[2732]: E0527 17:50:31.811110 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:31.812848 containerd[1593]: time="2025-05-27T17:50:31.812797780Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 17:50:31.827656 containerd[1593]: time="2025-05-27T17:50:31.827596939Z" level=info msg="Container b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d: CDI devices from CRI Config.CDIDevices: []"
May 27 17:50:31.841309 containerd[1593]: time="2025-05-27T17:50:31.841248553Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d\""
May 27 17:50:31.841934 containerd[1593]: time="2025-05-27T17:50:31.841898530Z" level=info msg="StartContainer for \"b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d\""
May 27 17:50:31.843609 containerd[1593]: time="2025-05-27T17:50:31.843542900Z" level=info msg="connecting to shim b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d" address="unix:///run/containerd/s/459e07630d0c6aea16eaba7b7097aafa1da976a03b3666d484cd1f2032b90d1b" protocol=ttrpc version=3
May 27 17:50:31.867754 systemd[1]: Started cri-containerd-b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d.scope - libcontainer container b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d.
May 27 17:50:31.917749 systemd[1]: cri-containerd-b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d.scope: Deactivated successfully.
May 27 17:50:31.919358 containerd[1593]: time="2025-05-27T17:50:31.919269360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d\" id:\"b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d\" pid:4682 exited_at:{seconds:1748368231 nanos:918988796}"
May 27 17:50:31.919358 containerd[1593]: time="2025-05-27T17:50:31.919341467Z" level=info msg="received exit event container_id:\"b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d\" id:\"b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d\" pid:4682 exited_at:{seconds:1748368231 nanos:918988796}"
May 27 17:50:31.931005 containerd[1593]: time="2025-05-27T17:50:31.930960624Z" level=info msg="StartContainer for \"b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d\" returns successfully"
May 27 17:50:31.944903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4d6a2cb3225c5150612f926add296350a84c460defe0b90c58a19a7ef6f0d4d-rootfs.mount: Deactivated successfully.
May 27 17:50:32.816789 kubelet[2732]: E0527 17:50:32.816754 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:32.818460 containerd[1593]: time="2025-05-27T17:50:32.818341460Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:50:32.827405 containerd[1593]: time="2025-05-27T17:50:32.826969698Z" level=info msg="Container 25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2: CDI devices from CRI Config.CDIDevices: []"
May 27 17:50:32.836919 containerd[1593]: time="2025-05-27T17:50:32.836874284Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2\""
May 27 17:50:32.837709 containerd[1593]: time="2025-05-27T17:50:32.837412938Z" level=info msg="StartContainer for \"25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2\""
May 27 17:50:32.838477 containerd[1593]: time="2025-05-27T17:50:32.838391290Z" level=info msg="connecting to shim 25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2" address="unix:///run/containerd/s/459e07630d0c6aea16eaba7b7097aafa1da976a03b3666d484cd1f2032b90d1b" protocol=ttrpc version=3
May 27 17:50:32.862664 systemd[1]: Started cri-containerd-25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2.scope - libcontainer container 25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2.
May 27 17:50:32.893010 systemd[1]: cri-containerd-25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2.scope: Deactivated successfully.
May 27 17:50:32.893763 containerd[1593]: time="2025-05-27T17:50:32.893685901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2\" id:\"25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2\" pid:4720 exited_at:{seconds:1748368232 nanos:893291862}"
May 27 17:50:32.895556 containerd[1593]: time="2025-05-27T17:50:32.895527636Z" level=info msg="received exit event container_id:\"25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2\" id:\"25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2\" pid:4720 exited_at:{seconds:1748368232 nanos:893291862}"
May 27 17:50:32.898197 containerd[1593]: time="2025-05-27T17:50:32.898160305Z" level=info msg="StartContainer for \"25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2\" returns successfully"
May 27 17:50:32.925275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25abd6ca814268337b3aa04905f0eb223ef0f1fb21acedd5d55b8fe0352606d2-rootfs.mount: Deactivated successfully.
May 27 17:50:33.822049 kubelet[2732]: E0527 17:50:33.822013 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:33.823695 containerd[1593]: time="2025-05-27T17:50:33.823646976Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 17:50:33.835676 containerd[1593]: time="2025-05-27T17:50:33.835633165Z" level=info msg="Container f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3: CDI devices from CRI Config.CDIDevices: []"
May 27 17:50:33.847164 containerd[1593]: time="2025-05-27T17:50:33.847087181Z" level=info msg="CreateContainer within sandbox \"45ce54524f2b77258440164382348e80d49e73b0f4df4dd3c4f6975829880e24\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\""
May 27 17:50:33.847718 containerd[1593]: time="2025-05-27T17:50:33.847648749Z" level=info msg="StartContainer for \"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\""
May 27 17:50:33.848664 containerd[1593]: time="2025-05-27T17:50:33.848602814Z" level=info msg="connecting to shim f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3" address="unix:///run/containerd/s/459e07630d0c6aea16eaba7b7097aafa1da976a03b3666d484cd1f2032b90d1b" protocol=ttrpc version=3
May 27 17:50:33.882845 systemd[1]: Started cri-containerd-f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3.scope - libcontainer container f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3.
May 27 17:50:33.943242 containerd[1593]: time="2025-05-27T17:50:33.943193021Z" level=info msg="StartContainer for \"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\" returns successfully"
May 27 17:50:34.022500 containerd[1593]: time="2025-05-27T17:50:34.022443408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\" id:\"f3e8c67994bb821e8f7af13a2d6f788258140cd5c05d91ba05f05cdf5b294f3a\" pid:4790 exited_at:{seconds:1748368234 nanos:22095997}"
May 27 17:50:34.437425 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 27 17:50:34.828630 kubelet[2732]: E0527 17:50:34.828593 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:34.845934 kubelet[2732]: I0527 17:50:34.845827 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pv9b5" podStartSLOduration=5.845752705 podStartE2EDuration="5.845752705s" podCreationTimestamp="2025-05-27 17:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:50:34.845537727 +0000 UTC m=+89.382739563" watchObservedRunningTime="2025-05-27 17:50:34.845752705 +0000 UTC m=+89.382954531"
May 27 17:50:35.838019 kubelet[2732]: E0527 17:50:35.837949 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:36.070495 containerd[1593]: time="2025-05-27T17:50:36.070423628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\" id:\"53243d9dc97b5624afc6de0fd2841c473595ebd7c28515d2c775b249faf1e198\" pid:4931 exit_status:1 exited_at:{seconds:1748368236 nanos:69521675}"
May 27 17:50:37.727219 systemd-networkd[1492]: lxc_health: Link UP
May 27 17:50:37.728912 systemd-networkd[1492]: lxc_health: Gained carrier
May 27 17:50:37.841481 kubelet[2732]: E0527 17:50:37.841426 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:38.217692 containerd[1593]: time="2025-05-27T17:50:38.217618287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\" id:\"43cf6fcd35d5d9546326589532a399d00562f81e9220660d8f9bb2459ae023c0\" pid:5319 exited_at:{seconds:1748368238 nanos:217057472}"
May 27 17:50:38.838124 kubelet[2732]: E0527 17:50:38.838069 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:39.509659 systemd-networkd[1492]: lxc_health: Gained IPv6LL
May 27 17:50:40.333571 containerd[1593]: time="2025-05-27T17:50:40.333505759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\" id:\"560e1bd65035ff9e2c49a874c7fd97129b6258d59db62acb27795193912a6dcd\" pid:5357 exited_at:{seconds:1748368240 nanos:333106862}"
May 27 17:50:41.567618 kubelet[2732]: E0527 17:50:41.567582 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:50:42.441610 containerd[1593]: time="2025-05-27T17:50:42.441557773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\" id:\"ccf6e66bc00d24daf6cfc73bcb1ed4c9b9f98c6c266406a679a0cc683b5b237b\" pid:5391 exited_at:{seconds:1748368242 nanos:441152124}"
May 27 17:50:44.540532 containerd[1593]: time="2025-05-27T17:50:44.540464001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5f0c0b13a9b79c07d723a9b0584691325238534d9997779e419fd2ad23e9ea3\" id:\"2e497ec5586562fc6e1407b63e26cd107c9c4e7cf5bf28823f46ee0be7b07696\" pid:5416 exited_at:{seconds:1748368244 nanos:539942933}"
May 27 17:50:44.547139 sshd[4522]: Connection closed by 10.0.0.1 port 55508
May 27 17:50:44.547624 sshd-session[4520]: pam_unix(sshd:session): session closed for user core
May 27 17:50:44.551776 systemd[1]: sshd@26-10.0.0.132:22-10.0.0.1:55508.service: Deactivated successfully.
May 27 17:50:44.553888 systemd[1]: session-27.scope: Deactivated successfully.
May 27 17:50:44.554926 systemd-logind[1567]: Session 27 logged out. Waiting for processes to exit.
May 27 17:50:44.556203 systemd-logind[1567]: Removed session 27.