May 9 00:37:22.871997 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:52:37 -00 2025
May 9 00:37:22.872018 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:37:22.872029 kernel: BIOS-provided physical RAM map:
May 9 00:37:22.872036 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 9 00:37:22.872042 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 9 00:37:22.872048 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 9 00:37:22.872055 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 9 00:37:22.872061 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 9 00:37:22.872067 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 9 00:37:22.872076 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 9 00:37:22.872082 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 9 00:37:22.872090 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 9 00:37:22.872099 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 9 00:37:22.872107 kernel: NX (Execute Disable) protection: active
May 9 00:37:22.872117 kernel: APIC: Static calls initialized
May 9 00:37:22.872126 kernel: SMBIOS 2.8 present.
May 9 00:37:22.872133 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 9 00:37:22.872140 kernel: Hypervisor detected: KVM
May 9 00:37:22.872146 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 00:37:22.872153 kernel: kvm-clock: using sched offset of 2251412927 cycles
May 9 00:37:22.872160 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 00:37:22.872167 kernel: tsc: Detected 2794.748 MHz processor
May 9 00:37:22.872174 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 00:37:22.872181 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 00:37:22.872188 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 9 00:37:22.872197 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 9 00:37:22.872204 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 00:37:22.872211 kernel: Using GB pages for direct mapping
May 9 00:37:22.872219 kernel: ACPI: Early table checksum verification disabled
May 9 00:37:22.872227 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 9 00:37:22.872234 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:37:22.872243 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:37:22.872251 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:37:22.872260 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 9 00:37:22.872266 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:37:22.872273 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:37:22.872280 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:37:22.872287 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:37:22.872293 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 9 00:37:22.872301 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 9 00:37:22.872311 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 9 00:37:22.872320 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 9 00:37:22.872327 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 9 00:37:22.872334 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 9 00:37:22.872341 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 9 00:37:22.872348 kernel: No NUMA configuration found
May 9 00:37:22.872355 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 9 00:37:22.872362 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 9 00:37:22.872372 kernel: Zone ranges:
May 9 00:37:22.872379 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 00:37:22.872386 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 9 00:37:22.872393 kernel: Normal empty
May 9 00:37:22.872400 kernel: Movable zone start for each node
May 9 00:37:22.872407 kernel: Early memory node ranges
May 9 00:37:22.872414 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 9 00:37:22.872421 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 9 00:37:22.872428 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 9 00:37:22.872437 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:37:22.872444 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 9 00:37:22.872451 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 9 00:37:22.872459 kernel: ACPI: PM-Timer IO Port: 0x608
May 9 00:37:22.872466 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 00:37:22.872473 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 9 00:37:22.872480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 9 00:37:22.872487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 00:37:22.872494 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 00:37:22.872503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 00:37:22.872510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 00:37:22.872517 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 00:37:22.872525 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 9 00:37:22.872532 kernel: TSC deadline timer available
May 9 00:37:22.872539 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 9 00:37:22.872546 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 00:37:22.872553 kernel: kvm-guest: KVM setup pv remote TLB flush
May 9 00:37:22.872560 kernel: kvm-guest: setup PV sched yield
May 9 00:37:22.872567 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 9 00:37:22.872576 kernel: Booting paravirtualized kernel on KVM
May 9 00:37:22.872584 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 00:37:22.872608 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 9 00:37:22.872615 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 9 00:37:22.872622 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 9 00:37:22.872629 kernel: pcpu-alloc: [0] 0 1 2 3
May 9 00:37:22.872636 kernel: kvm-guest: PV spinlocks enabled
May 9 00:37:22.872643 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 9 00:37:22.872651 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:37:22.872662 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:37:22.872669 kernel: random: crng init done
May 9 00:37:22.872676 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 00:37:22.872684 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:37:22.872691 kernel: Fallback order for Node 0: 0
May 9 00:37:22.872698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 9 00:37:22.872705 kernel: Policy zone: DMA32
May 9 00:37:22.872712 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:37:22.872722 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 136904K reserved, 0K cma-reserved)
May 9 00:37:22.872729 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 00:37:22.872736 kernel: ftrace: allocating 37944 entries in 149 pages
May 9 00:37:22.872743 kernel: ftrace: allocated 149 pages with 4 groups
May 9 00:37:22.872750 kernel: Dynamic Preempt: voluntary
May 9 00:37:22.872757 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:37:22.872765 kernel: rcu: RCU event tracing is enabled.
May 9 00:37:22.872773 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 00:37:22.872780 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:37:22.872789 kernel: Rude variant of Tasks RCU enabled.
May 9 00:37:22.872797 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:37:22.872804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:37:22.872811 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 00:37:22.872818 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 9 00:37:22.872825 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:37:22.872832 kernel: Console: colour VGA+ 80x25
May 9 00:37:22.872839 kernel: printk: console [ttyS0] enabled
May 9 00:37:22.872846 kernel: ACPI: Core revision 20230628
May 9 00:37:22.872865 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 9 00:37:22.872872 kernel: APIC: Switch to symmetric I/O mode setup
May 9 00:37:22.872880 kernel: x2apic enabled
May 9 00:37:22.872887 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 00:37:22.872894 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 9 00:37:22.872901 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 9 00:37:22.872908 kernel: kvm-guest: setup PV IPIs
May 9 00:37:22.872925 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 9 00:37:22.872933 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 9 00:37:22.872940 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 9 00:37:22.872948 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 9 00:37:22.872955 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 9 00:37:22.872965 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 9 00:37:22.872972 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 00:37:22.872980 kernel: Spectre V2 : Mitigation: Retpolines
May 9 00:37:22.872987 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 00:37:22.872995 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 9 00:37:22.873005 kernel: RETBleed: Mitigation: untrained return thunk
May 9 00:37:22.873012 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 9 00:37:22.873020 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 9 00:37:22.873028 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 9 00:37:22.873035 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 9 00:37:22.873043 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 9 00:37:22.873050 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 9 00:37:22.873058 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 9 00:37:22.873068 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 9 00:37:22.873075 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 9 00:37:22.873083 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 9 00:37:22.873092 kernel: Freeing SMP alternatives memory: 32K
May 9 00:37:22.873103 kernel: pid_max: default: 32768 minimum: 301
May 9 00:37:22.873113 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:37:22.873120 kernel: landlock: Up and running.
May 9 00:37:22.873127 kernel: SELinux: Initializing.
May 9 00:37:22.873135 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:37:22.873145 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:37:22.873153 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 9 00:37:22.873160 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:37:22.873168 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:37:22.873176 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:37:22.873183 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 9 00:37:22.873190 kernel: ... version: 0
May 9 00:37:22.873198 kernel: ... bit width: 48
May 9 00:37:22.873205 kernel: ... generic registers: 6
May 9 00:37:22.873215 kernel: ... value mask: 0000ffffffffffff
May 9 00:37:22.873222 kernel: ... max period: 00007fffffffffff
May 9 00:37:22.873230 kernel: ... fixed-purpose events: 0
May 9 00:37:22.873237 kernel: ... event mask: 000000000000003f
May 9 00:37:22.873244 kernel: signal: max sigframe size: 1776
May 9 00:37:22.873252 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:37:22.873259 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:37:22.873267 kernel: smp: Bringing up secondary CPUs ...
May 9 00:37:22.873274 kernel: smpboot: x86: Booting SMP configuration:
May 9 00:37:22.873284 kernel: .... node #0, CPUs: #1 #2 #3
May 9 00:37:22.873291 kernel: smp: Brought up 1 node, 4 CPUs
May 9 00:37:22.873298 kernel: smpboot: Max logical packages: 1
May 9 00:37:22.873306 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 9 00:37:22.873313 kernel: devtmpfs: initialized
May 9 00:37:22.873320 kernel: x86/mm: Memory block size: 128MB
May 9 00:37:22.873328 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:37:22.873336 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 00:37:22.873343 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:37:22.873353 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:37:22.873360 kernel: audit: initializing netlink subsys (disabled)
May 9 00:37:22.873367 kernel: audit: type=2000 audit(1746751043.266:1): state=initialized audit_enabled=0 res=1
May 9 00:37:22.873375 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:37:22.873382 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 00:37:22.873389 kernel: cpuidle: using governor menu
May 9 00:37:22.873397 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:37:22.873404 kernel: dca service started, version 1.12.1
May 9 00:37:22.873412 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 9 00:37:22.873422 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 9 00:37:22.873430 kernel: PCI: Using configuration type 1 for base access
May 9 00:37:22.873437 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 00:37:22.873445 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:37:22.873452 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:37:22.873460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:37:22.873467 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:37:22.873475 kernel: ACPI: Added _OSI(Module Device)
May 9 00:37:22.873482 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:37:22.873492 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:37:22.873499 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:37:22.873507 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 00:37:22.873514 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 00:37:22.873521 kernel: ACPI: Interpreter enabled
May 9 00:37:22.873529 kernel: ACPI: PM: (supports S0 S3 S5)
May 9 00:37:22.873536 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 00:37:22.873544 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 00:37:22.873551 kernel: PCI: Using E820 reservations for host bridge windows
May 9 00:37:22.873561 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 9 00:37:22.873568 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:37:22.873757 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:37:22.873895 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 9 00:37:22.874016 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 9 00:37:22.874026 kernel: PCI host bridge to bus 0000:00
May 9 00:37:22.874162 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 00:37:22.874284 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 00:37:22.874394 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 00:37:22.874505 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 9 00:37:22.874641 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 9 00:37:22.874756 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 9 00:37:22.874876 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:37:22.875018 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 9 00:37:22.875166 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 9 00:37:22.875295 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 9 00:37:22.875414 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 9 00:37:22.875533 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 9 00:37:22.875740 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 00:37:22.875878 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 9 00:37:22.876007 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 9 00:37:22.876136 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 9 00:37:22.876259 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 9 00:37:22.876402 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 9 00:37:22.876525 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 9 00:37:22.876678 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 9 00:37:22.876800 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 9 00:37:22.876945 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 9 00:37:22.877067 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 9 00:37:22.877198 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 9 00:37:22.877320 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 9 00:37:22.877439 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 9 00:37:22.877567 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 9 00:37:22.877785 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 9 00:37:22.877927 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 9 00:37:22.878046 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 9 00:37:22.878173 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 9 00:37:22.878300 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 9 00:37:22.878419 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 9 00:37:22.878429 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 00:37:22.878437 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 00:37:22.878448 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 00:37:22.878456 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 00:37:22.878463 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 9 00:37:22.878471 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 9 00:37:22.878478 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 9 00:37:22.878486 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 9 00:37:22.878493 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 9 00:37:22.878501 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 9 00:37:22.878508 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 9 00:37:22.878518 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 9 00:37:22.878526 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 9 00:37:22.878533 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 9 00:37:22.878541 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 9 00:37:22.878548 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 9 00:37:22.878556 kernel: iommu: Default domain type: Translated
May 9 00:37:22.878563 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 00:37:22.878571 kernel: PCI: Using ACPI for IRQ routing
May 9 00:37:22.878578 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 00:37:22.878603 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 9 00:37:22.878623 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 9 00:37:22.878747 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 9 00:37:22.878874 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 9 00:37:22.878993 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 00:37:22.879004 kernel: vgaarb: loaded
May 9 00:37:22.879011 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 9 00:37:22.879019 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 9 00:37:22.879031 kernel: clocksource: Switched to clocksource kvm-clock
May 9 00:37:22.879038 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:37:22.879046 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:37:22.879053 kernel: pnp: PnP ACPI init
May 9 00:37:22.879196 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 9 00:37:22.879208 kernel: pnp: PnP ACPI: found 6 devices
May 9 00:37:22.879216 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 00:37:22.879223 kernel: NET: Registered PF_INET protocol family
May 9 00:37:22.879237 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 00:37:22.879245 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 00:37:22.879254 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:37:22.879262 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:37:22.879270 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 00:37:22.879278 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 00:37:22.879286 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:37:22.879294 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:37:22.879302 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:37:22.879312 kernel: NET: Registered PF_XDP protocol family
May 9 00:37:22.879422 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 00:37:22.879547 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 00:37:22.879706 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 00:37:22.879857 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 9 00:37:22.880006 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 9 00:37:22.880154 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 9 00:37:22.880171 kernel: PCI: CLS 0 bytes, default 64
May 9 00:37:22.880188 kernel: Initialise system trusted keyrings
May 9 00:37:22.880198 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 00:37:22.880209 kernel: Key type asymmetric registered
May 9 00:37:22.880219 kernel: Asymmetric key parser 'x509' registered
May 9 00:37:22.880229 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 00:37:22.880242 kernel: io scheduler mq-deadline registered
May 9 00:37:22.880253 kernel: io scheduler kyber registered
May 9 00:37:22.880265 kernel: io scheduler bfq registered
May 9 00:37:22.880275 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 00:37:22.880291 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 9 00:37:22.880301 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 9 00:37:22.880312 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 9 00:37:22.880322 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:37:22.880333 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 00:37:22.880343 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 00:37:22.880354 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 00:37:22.880364 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 00:37:22.880529 kernel: rtc_cmos 00:04: RTC can wake from S4
May 9 00:37:22.880765 kernel: rtc_cmos 00:04: registered as rtc0
May 9 00:37:22.880782 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 00:37:22.880939 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:37:22 UTC (1746751042)
May 9 00:37:22.881088 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 9 00:37:22.881104 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 9 00:37:22.881114 kernel: NET: Registered PF_INET6 protocol family
May 9 00:37:22.881125 kernel: Segment Routing with IPv6
May 9 00:37:22.881135 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:37:22.881152 kernel: NET: Registered PF_PACKET protocol family
May 9 00:37:22.881162 kernel: Key type dns_resolver registered
May 9 00:37:22.881172 kernel: IPI shorthand broadcast: enabled
May 9 00:37:22.881182 kernel: sched_clock: Marking stable (556002324, 105043516)->(715035903, -53990063)
May 9 00:37:22.881192 kernel: registered taskstats version 1
May 9 00:37:22.881202 kernel: Loading compiled-in X.509 certificates
May 9 00:37:22.881213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: fe5c896a3ca06bb89ebdfb7ed85f611806e4c1cc'
May 9 00:37:22.881223 kernel: Key type .fscrypt registered
May 9 00:37:22.881233 kernel: Key type fscrypt-provisioning registered
May 9 00:37:22.881248 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:37:22.881258 kernel: ima: Allocated hash algorithm: sha1
May 9 00:37:22.881269 kernel: ima: No architecture policies found
May 9 00:37:22.881279 kernel: clk: Disabling unused clocks
May 9 00:37:22.881289 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 9 00:37:22.881299 kernel: Write protecting the kernel read-only data: 36864k
May 9 00:37:22.881309 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 9 00:37:22.881320 kernel: Run /init as init process
May 9 00:37:22.881330 kernel: with arguments:
May 9 00:37:22.881345 kernel: /init
May 9 00:37:22.881355 kernel: with environment:
May 9 00:37:22.881365 kernel: HOME=/
May 9 00:37:22.881375 kernel: TERM=linux
May 9 00:37:22.881386 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 00:37:22.881399 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:37:22.881412 systemd[1]: Detected virtualization kvm.
May 9 00:37:22.881423 systemd[1]: Detected architecture x86-64.
May 9 00:37:22.881439 systemd[1]: Running in initrd.
May 9 00:37:22.881450 systemd[1]: No hostname configured, using default hostname.
May 9 00:37:22.881461 systemd[1]: Hostname set to .
May 9 00:37:22.881472 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:37:22.881483 systemd[1]: Queued start job for default target initrd.target.
May 9 00:37:22.881494 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:37:22.881505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:37:22.881517 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:37:22.881533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:37:22.881560 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:37:22.881575 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:37:22.881604 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:37:22.881621 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:37:22.881633 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:37:22.881644 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:37:22.881655 systemd[1]: Reached target paths.target - Path Units.
May 9 00:37:22.881667 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:37:22.881678 systemd[1]: Reached target swap.target - Swaps.
May 9 00:37:22.881690 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:37:22.881701 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:37:22.881713 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:37:22.881728 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:37:22.881740 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:37:22.881751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:37:22.881762 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:37:22.881778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:37:22.881789 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:37:22.881800 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:37:22.881811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:37:22.881825 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:37:22.881836 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:37:22.881856 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:37:22.881868 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:37:22.881879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:37:22.881891 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:37:22.881902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:37:22.881914 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:37:22.881930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:37:22.881964 systemd-journald[193]: Collecting audit messages is disabled.
May 9 00:37:22.881995 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:37:22.882011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:37:22.882023 systemd-journald[193]: Journal started
May 9 00:37:22.882051 systemd-journald[193]: Runtime Journal (/run/log/journal/8a87922653b24cf5b5d6aa1969539a17) is 6.0M, max 48.4M, 42.3M free.
May 9 00:37:22.889050 systemd-modules-load[194]: Inserted module 'overlay'
May 9 00:37:22.916075 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:37:22.915644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:37:22.921607 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 00:37:22.923695 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 9 00:37:22.924635 kernel: Bridge firewalling registered
May 9 00:37:22.927355 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:37:22.929276 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:37:22.931312 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:37:22.936155 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:37:22.938795 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:37:22.943182 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:37:22.950822 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:37:22.953204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 00:37:22.961944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:37:22.965965 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:37:22.972381 dracut-cmdline[225]: dracut-dracut-053
May 9 00:37:22.975466 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:37:23.012891 systemd-resolved[231]: Positive Trust Anchors:
May 9 00:37:23.012914 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:37:23.012945 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:37:23.023761 systemd-resolved[231]: Defaulting to hostname 'linux'.
May 9 00:37:23.025827 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:37:23.028374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:37:23.064648 kernel: SCSI subsystem initialized
May 9 00:37:23.073632 kernel: Loading iSCSI transport class v2.0-870.
May 9 00:37:23.083653 kernel: iscsi: registered transport (tcp)
May 9 00:37:23.105645 kernel: iscsi: registered transport (qla4xxx)
May 9 00:37:23.105725 kernel: QLogic iSCSI HBA Driver
May 9 00:37:23.156931 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 00:37:23.167743 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 00:37:23.193934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 00:37:23.193992 kernel: device-mapper: uevent: version 1.0.3
May 9 00:37:23.194965 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 00:37:23.236631 kernel: raid6: avx2x4 gen() 27032 MB/s
May 9 00:37:23.253616 kernel: raid6: avx2x2 gen() 28658 MB/s
May 9 00:37:23.270697 kernel: raid6: avx2x1 gen() 23588 MB/s
May 9 00:37:23.270724 kernel: raid6: using algorithm avx2x2 gen() 28658 MB/s
May 9 00:37:23.288702 kernel: raid6: .... xor() 19878 MB/s, rmw enabled
May 9 00:37:23.288719 kernel: raid6: using avx2x2 recovery algorithm
May 9 00:37:23.308616 kernel: xor: automatically using best checksumming function avx
May 9 00:37:23.463637 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 00:37:23.477930 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:37:23.486715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:37:23.499576 systemd-udevd[412]: Using default interface naming scheme 'v255'.
May 9 00:37:23.504174 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:37:23.521726 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 00:37:23.537103 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
May 9 00:37:23.571767 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:37:23.579785 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:37:23.641517 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:37:23.649745 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 00:37:23.664083 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 00:37:23.666924 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:37:23.670017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:37:23.672672 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:37:23.675608 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 9 00:37:23.688704 kernel: cryptd: max_cpu_qlen set to 1000
May 9 00:37:23.688751 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 00:37:23.686793 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 00:37:23.697614 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 00:37:23.697647 kernel: GPT:9289727 != 19775487
May 9 00:37:23.697659 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 00:37:23.697677 kernel: GPT:9289727 != 19775487
May 9 00:37:23.697686 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 00:37:23.697696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:37:23.700780 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:37:23.702111 kernel: libata version 3.00 loaded.
May 9 00:37:23.709960 kernel: ahci 0000:00:1f.2: version 3.0
May 9 00:37:23.710176 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 9 00:37:23.711291 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 9 00:37:23.711131 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:37:23.717655 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 9 00:37:23.717819 kernel: AVX2 version of gcm_enc/dec engaged.
May 9 00:37:23.717841 kernel: scsi host0: ahci
May 9 00:37:23.718001 kernel: scsi host1: ahci
May 9 00:37:23.718912 kernel: AES CTR mode by8 optimization enabled
May 9 00:37:23.718934 kernel: scsi host2: ahci
May 9 00:37:23.711260 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:37:23.722337 kernel: scsi host3: ahci
May 9 00:37:23.722518 kernel: scsi host4: ahci
May 9 00:37:23.722698 kernel: scsi host5: ahci
May 9 00:37:23.722858 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 9 00:37:23.718303 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:37:23.734555 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 9 00:37:23.734580 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 9 00:37:23.734603 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 9 00:37:23.734619 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 9 00:37:23.734629 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 9 00:37:23.722563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:37:23.739282 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456)
May 9 00:37:23.739302 kernel: BTRFS: device fsid 8d57db23-a0fc-4362-9769-38fbda5747c1 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (459)
May 9 00:37:23.723183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:37:23.730220 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:37:23.740934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:37:23.765175 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 00:37:23.790424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:37:23.798640 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 00:37:23.806631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:37:23.813148 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 00:37:23.816169 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 00:37:23.832792 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 00:37:23.836385 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:37:23.842793 disk-uuid[554]: Primary Header is updated.
May 9 00:37:23.842793 disk-uuid[554]: Secondary Entries is updated.
May 9 00:37:23.842793 disk-uuid[554]: Secondary Header is updated.
May 9 00:37:23.846548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:37:23.857458 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:37:24.036806 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 9 00:37:24.036888 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 9 00:37:24.036899 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 9 00:37:24.038637 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 9 00:37:24.038719 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 9 00:37:24.039624 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 9 00:37:24.040626 kernel: ata3.00: applying bridge limits
May 9 00:37:24.040650 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 9 00:37:24.041621 kernel: ata3.00: configured for UDMA/100
May 9 00:37:24.042648 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 9 00:37:24.086620 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 9 00:37:24.086890 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 9 00:37:24.100614 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 9 00:37:24.856637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:37:24.857141 disk-uuid[558]: The operation has completed successfully.
May 9 00:37:24.914955 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 00:37:24.915093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 00:37:24.926716 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 00:37:24.932682 sh[592]: Success
May 9 00:37:24.945619 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 9 00:37:24.976832 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 00:37:24.988066 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 00:37:24.991138 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 00:37:25.002045 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57db23-a0fc-4362-9769-38fbda5747c1
May 9 00:37:25.002079 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 9 00:37:25.002090 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 00:37:25.003204 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 00:37:25.004044 kernel: BTRFS info (device dm-0): using free space tree
May 9 00:37:25.009042 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 00:37:25.011565 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 00:37:25.026752 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 00:37:25.028469 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 00:37:25.037416 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:37:25.037459 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:37:25.037476 kernel: BTRFS info (device vda6): using free space tree
May 9 00:37:25.040620 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:37:25.050215 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 00:37:25.052255 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:37:25.061178 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 00:37:25.066747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 00:37:25.294828 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:37:25.303918 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:37:25.315926 ignition[684]: Ignition 2.19.0
May 9 00:37:25.315939 ignition[684]: Stage: fetch-offline
May 9 00:37:25.315978 ignition[684]: no configs at "/usr/lib/ignition/base.d"
May 9 00:37:25.315991 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:37:25.316090 ignition[684]: parsed url from cmdline: ""
May 9 00:37:25.316100 ignition[684]: no config URL provided
May 9 00:37:25.316105 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:37:25.316114 ignition[684]: no config at "/usr/lib/ignition/user.ign"
May 9 00:37:25.316146 ignition[684]: op(1): [started] loading QEMU firmware config module
May 9 00:37:25.316152 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 00:37:25.331969 ignition[684]: op(1): [finished] loading QEMU firmware config module
May 9 00:37:25.333281 systemd-networkd[776]: lo: Link UP
May 9 00:37:25.333285 systemd-networkd[776]: lo: Gained carrier
May 9 00:37:25.334890 systemd-networkd[776]: Enumeration completed
May 9 00:37:25.335287 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:37:25.335291 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:37:25.335296 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:37:25.336627 systemd-networkd[776]: eth0: Link UP
May 9 00:37:25.336631 systemd-networkd[776]: eth0: Gained carrier
May 9 00:37:25.336637 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:37:25.337237 systemd[1]: Reached target network.target - Network.
May 9 00:37:25.361640 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:37:25.390049 ignition[684]: parsing config with SHA512: c588843f8f9a8422553aa5df14628a32bf20bade091bda82f5655cd56ce80852683b326ebaaf185ff9c10722e55eba1ec08935655d23f271eaed4a799731cf36
May 9 00:37:25.396000 unknown[684]: fetched base config from "system"
May 9 00:37:25.396246 unknown[684]: fetched user config from "qemu"
May 9 00:37:25.396837 ignition[684]: fetch-offline: fetch-offline passed
May 9 00:37:25.396984 ignition[684]: Ignition finished successfully
May 9 00:37:25.402016 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:37:25.402264 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 00:37:25.411773 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 00:37:25.430371 ignition[784]: Ignition 2.19.0
May 9 00:37:25.430386 ignition[784]: Stage: kargs
May 9 00:37:25.431327 ignition[784]: no configs at "/usr/lib/ignition/base.d"
May 9 00:37:25.431345 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:37:25.435230 ignition[784]: kargs: kargs passed
May 9 00:37:25.436049 ignition[784]: Ignition finished successfully
May 9 00:37:25.440298 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 00:37:25.444748 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 00:37:25.461323 ignition[791]: Ignition 2.19.0
May 9 00:37:25.461332 ignition[791]: Stage: disks
May 9 00:37:25.461498 ignition[791]: no configs at "/usr/lib/ignition/base.d"
May 9 00:37:25.461508 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:37:25.462355 ignition[791]: disks: disks passed
May 9 00:37:25.462393 ignition[791]: Ignition finished successfully
May 9 00:37:25.465525 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 00:37:25.467259 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 00:37:25.469347 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:37:25.470634 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:37:25.472662 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:37:25.473700 systemd[1]: Reached target basic.target - Basic System.
May 9 00:37:25.492740 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 00:37:25.554096 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 00:37:25.753305 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 00:37:25.762716 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 00:37:25.872616 kernel: EXT4-fs (vda9): mounted filesystem 4cb03022-f5a4-4664-b5b4-bc39fcc2f946 r/w with ordered data mode. Quota mode: none.
May 9 00:37:25.872711 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 00:37:25.874215 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 00:37:25.890684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:37:25.891738 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 00:37:25.893626 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 00:37:25.893674 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 00:37:25.893703 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:37:25.900690 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 00:37:25.904016 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 00:37:25.909614 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809)
May 9 00:37:25.911612 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:37:25.911633 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:37:25.911650 kernel: BTRFS info (device vda6): using free space tree
May 9 00:37:25.914606 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:37:25.917168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:37:25.952192 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
May 9 00:37:26.034291 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
May 9 00:37:26.038978 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
May 9 00:37:26.043216 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 00:37:26.136024 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 00:37:26.140792 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 00:37:26.142917 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 00:37:26.153337 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 00:37:26.155292 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:37:26.171392 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 00:37:26.202142 ignition[924]: INFO : Ignition 2.19.0
May 9 00:37:26.202142 ignition[924]: INFO : Stage: mount
May 9 00:37:26.203988 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:37:26.203988 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:37:26.206641 ignition[924]: INFO : mount: mount passed
May 9 00:37:26.207420 ignition[924]: INFO : Ignition finished successfully
May 9 00:37:26.210096 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 00:37:26.217707 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 00:37:26.224265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:37:26.236501 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
May 9 00:37:26.236559 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:37:26.236572 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:37:26.238015 kernel: BTRFS info (device vda6): using free space tree
May 9 00:37:26.240609 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:37:26.241631 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:37:26.267494 ignition[956]: INFO : Ignition 2.19.0
May 9 00:37:26.267494 ignition[956]: INFO : Stage: files
May 9 00:37:26.269443 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:37:26.269443 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:37:26.269443 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
May 9 00:37:26.269443 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 00:37:26.274681 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 00:37:26.276228 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 00:37:26.277666 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 00:37:26.279295 unknown[956]: wrote ssh authorized keys file for user: core
May 9 00:37:26.280444 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 00:37:26.282006 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 00:37:26.282006 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 9 00:37:26.325316 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 00:37:26.611921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 00:37:26.611921 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:37:26.616789 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 9 00:37:26.757917 systemd-networkd[776]: eth0: Gained IPv6LL
May 9 00:37:27.077546 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 00:37:27.251246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:37:27.251246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 9 00:37:27.255066 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 9 00:37:27.557995 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 00:37:28.289959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 9 00:37:28.292546 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 00:37:28.294137 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:37:28.296605 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:37:28.296605 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 9 00:37:28.296605 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 9 00:37:28.301348 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:37:28.303467 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:37:28.303467 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 9 00:37:28.306908 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 9 00:37:28.333293 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:37:28.340219 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:37:28.341939 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 00:37:28.341939 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 9 00:37:28.344758 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 9 00:37:28.346250 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:37:28.348024 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:37:28.349697 ignition[956]: INFO : files: files passed
May 9 00:37:28.350442 ignition[956]: INFO : Ignition finished successfully
May 9 00:37:28.353035 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 00:37:28.361741 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 00:37:28.363561 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 00:37:28.367741 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 00:37:28.368789 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 00:37:28.373238 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 00:37:28.377014 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:37:28.378686 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:37:28.380234 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:37:28.383075 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:37:28.384519 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 00:37:28.392820 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 00:37:28.414684 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 00:37:28.414821 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 00:37:28.417433 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:37:28.419195 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:37:28.421240 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:37:28.422043 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:37:28.438638 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:37:28.448716 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:37:28.458398 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:37:28.459686 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:37:28.461901 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:37:28.463921 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:37:28.464028 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:37:28.466396 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 00:37:28.467999 systemd[1]: Stopped target basic.target - Basic System.
May 9 00:37:28.470087 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 00:37:28.472153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:37:28.474220 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 00:37:28.476369 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 00:37:28.478508 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:37:28.480817 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 00:37:28.482911 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 00:37:28.485138 systemd[1]: Stopped target swap.target - Swaps.
May 9 00:37:28.487044 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 00:37:28.487225 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:37:28.489549 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 00:37:28.491112 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:37:28.493231 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 00:37:28.493362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:37:28.495468 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 00:37:28.495571 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 00:37:28.498021 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 00:37:28.498134 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:37:28.500117 systemd[1]: Stopped target paths.target - Path Units.
May 9 00:37:28.501936 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 00:37:28.505666 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:37:28.507746 systemd[1]: Stopped target slices.target - Slice Units.
May 9 00:37:28.509761 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 00:37:28.511578 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 00:37:28.511681 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:37:28.513882 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 00:37:28.514012 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:37:28.516469 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 00:37:28.516585 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:37:28.518761 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 00:37:28.518865 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 00:37:28.529738 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 00:37:28.530945 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 00:37:28.532691 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 00:37:28.532864 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:37:28.535818 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 00:37:28.535965 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:37:28.541892 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 00:37:28.542043 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 00:37:28.552123 ignition[1009]: INFO : Ignition 2.19.0
May 9 00:37:28.552123 ignition[1009]: INFO : Stage: umount
May 9 00:37:28.553850 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:37:28.553850 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:37:28.553850 ignition[1009]: INFO : umount: umount passed
May 9 00:37:28.553850 ignition[1009]: INFO : Ignition finished successfully
May 9 00:37:28.555270 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 00:37:28.555407 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 00:37:28.558440 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 00:37:28.558861 systemd[1]: Stopped target network.target - Network.
May 9 00:37:28.560374 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 00:37:28.560428 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 00:37:28.560526 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 00:37:28.560573 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 00:37:28.560925 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 00:37:28.560974 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 00:37:28.561255 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 00:37:28.561296 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 00:37:28.561753 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 00:37:28.562308 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 00:37:28.570629 systemd-networkd[776]: eth0: DHCPv6 lease lost
May 9 00:37:28.572311 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 00:37:28.572483 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 00:37:28.575630 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 00:37:28.575791 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 00:37:28.578295 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 00:37:28.578376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:37:28.589700 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 00:37:28.590912 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 00:37:28.590966 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:37:28.593563 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:37:28.593631 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:37:28.595614 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 00:37:28.595664 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 00:37:28.597846 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 00:37:28.597892 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:37:28.600444 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:37:28.610446 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 00:37:28.610650 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:37:28.613003 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 00:37:28.613110 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 00:37:28.615460 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 00:37:28.615537 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 00:37:28.616995 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 00:37:28.617034 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:37:28.618985 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 00:37:28.619037 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:37:28.639300 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 00:37:28.639353 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 00:37:28.641332 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:37:28.641378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:37:28.653721 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 00:37:28.670577 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 00:37:28.670655 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:37:28.670761 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 00:37:28.670806 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:37:28.671086 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 00:37:28.671129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:37:28.671426 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:37:28.671467 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:37:28.672354 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 00:37:28.672460 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 00:37:28.807325 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 00:37:28.807469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 00:37:28.809567 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 00:37:28.811313 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 00:37:28.811369 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 00:37:28.830900 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 00:37:28.839755 systemd[1]: Switching root.
May 9 00:37:28.872408 systemd-journald[193]: Journal stopped
May 9 00:37:30.240557 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
May 9 00:37:30.240791 kernel: SELinux: policy capability network_peer_controls=1
May 9 00:37:30.240812 kernel: SELinux: policy capability open_perms=1
May 9 00:37:30.240823 kernel: SELinux: policy capability extended_socket_class=1
May 9 00:37:30.240834 kernel: SELinux: policy capability always_check_network=0
May 9 00:37:30.240845 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 00:37:30.240857 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 00:37:30.240868 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 00:37:30.240879 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 00:37:30.240896 kernel: audit: type=1403 audit(1746751049.449:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 00:37:30.240910 systemd[1]: Successfully loaded SELinux policy in 44.551ms.
May 9 00:37:30.240935 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.415ms.
May 9 00:37:30.240948 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:37:30.240960 systemd[1]: Detected virtualization kvm.
May 9 00:37:30.240972 systemd[1]: Detected architecture x86-64.
May 9 00:37:30.240991 systemd[1]: Detected first boot.
May 9 00:37:30.241003 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:37:30.241020 zram_generator::config[1053]: No configuration found.
May 9 00:37:30.241037 systemd[1]: Populated /etc with preset unit settings.
May 9 00:37:30.241048 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 00:37:30.241060 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 00:37:30.241072 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 00:37:30.241084 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 00:37:30.241096 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 00:37:30.241109 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 00:37:30.241121 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 00:37:30.241133 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 00:37:30.241148 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 00:37:30.241160 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 00:37:30.241172 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 00:37:30.241184 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:37:30.241196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:37:30.241208 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 00:37:30.241219 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 00:37:30.241231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 00:37:30.241244 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:37:30.241259 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 00:37:30.241271 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:37:30.241283 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 00:37:30.241295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 00:37:30.241306 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 00:37:30.241318 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 00:37:30.241330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:37:30.241344 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:37:30.241356 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:37:30.241368 systemd[1]: Reached target swap.target - Swaps.
May 9 00:37:30.241379 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 00:37:30.241391 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 00:37:30.241403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:37:30.241414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:37:30.241426 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:37:30.241438 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 00:37:30.241450 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 00:37:30.241464 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 00:37:30.241476 systemd[1]: Mounting media.mount - External Media Directory...
May 9 00:37:30.241487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:37:30.241499 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 00:37:30.241512 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 00:37:30.241524 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 00:37:30.241536 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 00:37:30.241548 systemd[1]: Reached target machines.target - Containers.
May 9 00:37:30.241562 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 00:37:30.241574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:37:30.241606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:37:30.241618 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 00:37:30.241630 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:37:30.241657 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:37:30.241670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:37:30.241682 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 00:37:30.241693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:37:30.241708 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 00:37:30.241721 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 00:37:30.241733 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 00:37:30.241744 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 00:37:30.241756 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 00:37:30.241768 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:37:30.241780 kernel: fuse: init (API version 7.39)
May 9 00:37:30.241791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:37:30.241806 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 00:37:30.241821 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 00:37:30.241833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:37:30.241844 kernel: ACPI: bus type drm_connector registered
May 9 00:37:30.241856 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 00:37:30.241886 systemd-journald[1127]: Collecting audit messages is disabled.
May 9 00:37:30.241913 systemd[1]: Stopped verity-setup.service.
May 9 00:37:30.241925 kernel: loop: module loaded
May 9 00:37:30.241939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:37:30.241951 systemd-journald[1127]: Journal started
May 9 00:37:30.241973 systemd-journald[1127]: Runtime Journal (/run/log/journal/8a87922653b24cf5b5d6aa1969539a17) is 6.0M, max 48.4M, 42.3M free.
May 9 00:37:30.024091 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:37:30.042512 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 00:37:30.042972 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 00:37:30.246506 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:37:30.247377 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 00:37:30.248632 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 00:37:30.249887 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:37:30.251021 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:37:30.252257 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:37:30.253508 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:37:30.254781 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 00:37:30.256271 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:37:30.257876 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:37:30.258050 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:37:30.259609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:37:30.259786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:37:30.261241 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:37:30.261415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:37:30.263127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:37:30.263300 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:37:30.264875 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:37:30.265047 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:37:30.266559 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:37:30.266755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:37:30.268189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:37:30.269657 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:37:30.271367 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:37:30.341840 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:37:30.353694 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:37:30.356112 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:37:30.357294 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:37:30.357325 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:37:30.359414 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:37:30.361791 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:37:30.364021 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:37:30.365195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:37:30.366582 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:37:30.368794 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:37:30.370090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:37:30.371933 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:37:30.373678 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:37:30.375612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:37:30.379027 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:37:30.384683 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:37:30.387604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:37:30.389133 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:37:30.390498 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:37:30.392538 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:37:30.442686 systemd-journald[1127]: Time spent on flushing to /var/log/journal/8a87922653b24cf5b5d6aa1969539a17 is 16.965ms for 958 entries.
May 9 00:37:30.442686 systemd-journald[1127]: System Journal (/var/log/journal/8a87922653b24cf5b5d6aa1969539a17) is 8.0M, max 195.6M, 187.6M free.
May 9 00:37:30.469115 kernel: loop0: detected capacity change from 0 to 140768
May 9 00:37:30.469139 systemd-journald[1127]: Received client request to flush runtime journal.
May 9 00:37:30.448148 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:37:30.450507 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:37:30.460254 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:37:30.470407 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:37:30.472295 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:37:30.474516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:37:30.480481 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 00:37:30.488504 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
May 9 00:37:30.488889 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
May 9 00:37:30.494921 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:37:30.526466 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:37:30.530690 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:37:30.534714 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:37:30.535986 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:37:30.553617 kernel: loop1: detected capacity change from 0 to 142488
May 9 00:37:30.555769 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:37:30.564096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:37:30.581773 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
May 9 00:37:30.581794 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
May 9 00:37:30.588870 kernel: loop2: detected capacity change from 0 to 205544
May 9 00:37:30.587540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:37:30.619632 kernel: loop3: detected capacity change from 0 to 140768
May 9 00:37:30.633617 kernel: loop4: detected capacity change from 0 to 142488
May 9 00:37:30.642629 kernel: loop5: detected capacity change from 0 to 205544
May 9 00:37:30.647924 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 00:37:30.649456 (sd-merge)[1196]: Merged extensions into '/usr'.
May 9 00:37:30.726708 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:37:30.726728 systemd[1]: Reloading...
May 9 00:37:30.775622 zram_generator::config[1219]: No configuration found.
May 9 00:37:30.933070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:37:30.980204 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 00:37:30.988129 systemd[1]: Reloading finished in 260 ms.
May 9 00:37:31.035385 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 00:37:31.037095 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:37:31.051759 systemd[1]: Starting ensure-sysext.service...
May 9 00:37:31.054184 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:37:31.062853 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
May 9 00:37:31.062869 systemd[1]: Reloading...
May 9 00:37:31.142297 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:37:31.142662 zram_generator::config[1283]: No configuration found.
May 9 00:37:31.145002 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:37:31.150807 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:37:31.151119 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
May 9 00:37:31.151196 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
May 9 00:37:31.154780 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:37:31.154867 systemd-tmpfiles[1260]: Skipping /boot
May 9 00:37:31.166142 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:37:31.166215 systemd-tmpfiles[1260]: Skipping /boot
May 9 00:37:31.266710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:37:31.316744 systemd[1]: Reloading finished in 253 ms.
May 9 00:37:31.335105 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 00:37:31.336825 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:37:31.356913 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 00:37:31.359436 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 00:37:31.361780 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 00:37:31.367776 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:37:31.379786 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:37:31.382447 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 00:37:31.387159 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:37:31.387322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:37:31.389796 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:37:31.402249 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:37:31.405508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:37:31.406804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:37:31.414508 augenrules[1350]: No rules
May 9 00:37:31.416837 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 00:37:31.417983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:37:31.419152 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 00:37:31.420571 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
May 9 00:37:31.421189 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 00:37:31.423193 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:37:31.423420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:37:31.425254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:37:31.425425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:37:31.427552 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:37:31.427783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:37:31.437706 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 00:37:31.440018 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 00:37:31.446072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:37:31.448474 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:37:31.448987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:37:31.458010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:37:31.461879 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:37:31.464907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:37:31.468925 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:37:31.470143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:37:31.478846 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:37:31.483042 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 00:37:31.484231 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:37:31.484365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:37:31.485829 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 00:37:31.487565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:37:31.487831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:37:31.503116 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1373)
May 9 00:37:31.502388 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:37:31.502691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:37:31.504244 systemd[1]: Finished ensure-sysext.service.
May 9 00:37:31.507420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:37:31.507610 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:37:31.518723 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 00:37:31.602766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:37:31.602859 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:37:31.605812 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 00:37:31.616602 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:37:31.616798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:37:31.627961 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 00:37:31.682135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:37:31.693732 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 00:37:31.710712 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 9 00:37:31.710983 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 9 00:37:31.711173 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 9 00:37:31.721571 systemd-resolved[1329]: Positive Trust Anchors:
May 9 00:37:31.722974 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 9 00:37:31.721599 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:37:31.721639 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:37:31.727714 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 9 00:37:31.727756 kernel: ACPI: button: Power Button [PWRF]
May 9 00:37:31.737959 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 00:37:31.739848 systemd-resolved[1329]: Defaulting to hostname 'linux'.
May 9 00:37:31.747662 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:37:31.751837 systemd-networkd[1389]: lo: Link UP
May 9 00:37:31.752124 systemd-networkd[1389]: lo: Gained carrier
May 9 00:37:31.752752 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:37:31.754102 systemd-networkd[1389]: Enumeration completed
May 9 00:37:31.754883 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:37:31.755079 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:37:31.755145 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:37:31.759105 systemd[1]: Reached target network.target - Network.
May 9 00:37:31.800709 systemd-networkd[1389]: eth0: Link UP
May 9 00:37:31.800715 systemd-networkd[1389]: eth0: Gained carrier
May 9 00:37:31.800794 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:37:31.803613 kernel: mousedev: PS/2 mouse device common for all mice
May 9 00:37:31.808093 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 00:37:31.811861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:37:31.813653 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:37:31.826507 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 00:37:31.827898 systemd[1]: Reached target time-set.target - System Time Set.
May 9 00:37:33.196446 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 00:37:33.196783 systemd-timesyncd[1401]: Initial clock synchronization to Fri 2025-05-09 00:37:33.195532 UTC.
May 9 00:37:33.196880 systemd-resolved[1329]: Clock change detected. Flushing caches.
May 9 00:37:33.280847 kernel: kvm_amd: TSC scaling supported
May 9 00:37:33.280931 kernel: kvm_amd: Nested Virtualization enabled
May 9 00:37:33.280981 kernel: kvm_amd: Nested Paging enabled
May 9 00:37:33.281223 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:37:33.281357 kernel: kvm_amd: LBR virtualization supported
May 9 00:37:33.281373 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 9 00:37:33.281385 kernel: kvm_amd: Virtual GIF supported
May 9 00:37:33.304551 kernel: EDAC MC: Ver: 3.0.0
May 9 00:37:33.329632 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 00:37:33.338511 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 00:37:33.348217 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:37:33.378586 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 00:37:33.380198 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:37:33.381400 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:37:33.382610 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 00:37:33.383911 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 00:37:33.385427 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 00:37:33.386644 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 00:37:33.388071 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 00:37:33.389371 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 00:37:33.389405 systemd[1]: Reached target paths.target - Path Units.
May 9 00:37:33.390350 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:37:33.392248 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 00:37:33.395107 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 00:37:33.407774 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 00:37:33.410389 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 00:37:33.412003 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 00:37:33.413225 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:37:33.414229 systemd[1]: Reached target basic.target - Basic System.
May 9 00:37:33.415245 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 00:37:33.415275 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 00:37:33.416410 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 00:37:33.418581 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 00:37:33.421440 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:37:33.423439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 00:37:33.426271 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 00:37:33.427414 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 00:37:33.429186 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 00:37:33.433031 jq[1430]: false
May 9 00:37:33.436498 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 00:37:33.442849 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 00:37:33.445183 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 00:37:33.450518 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 00:37:33.451860 dbus-daemon[1429]: [system] SELinux support is enabled
May 9 00:37:33.452092 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 00:37:33.453263 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 00:37:33.454251 systemd[1]: Starting update-engine.service - Update Engine...
May 9 00:37:33.458248 extend-filesystems[1431]: Found loop3
May 9 00:37:33.463350 extend-filesystems[1431]: Found loop4
May 9 00:37:33.463350 extend-filesystems[1431]: Found loop5
May 9 00:37:33.463350 extend-filesystems[1431]: Found sr0
May 9 00:37:33.463350 extend-filesystems[1431]: Found vda
May 9 00:37:33.463350 extend-filesystems[1431]: Found vda1
May 9 00:37:33.463350 extend-filesystems[1431]: Found vda2
May 9 00:37:33.463350 extend-filesystems[1431]: Found vda3
May 9 00:37:33.463350 extend-filesystems[1431]: Found usr
May 9 00:37:33.463350 extend-filesystems[1431]: Found vda4
May 9 00:37:33.463350 extend-filesystems[1431]: Found vda6
May 9 00:37:33.463350 extend-filesystems[1431]: Found vda7
May 9 00:37:33.463350 extend-filesystems[1431]: Found vda9
May 9 00:37:33.463350 extend-filesystems[1431]: Checking size of /dev/vda9
May 9 00:37:33.497393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1377)
May 9 00:37:33.497429 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 9 00:37:33.458454 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 00:37:33.497542 extend-filesystems[1431]: Resized partition /dev/vda9
May 9 00:37:33.460480 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 00:37:33.498896 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
May 9 00:37:33.503572 update_engine[1443]: I20250509 00:37:33.483319 1443 main.cc:92] Flatcar Update Engine starting
May 9 00:37:33.503572 update_engine[1443]: I20250509 00:37:33.487761 1443 update_check_scheduler.cc:74] Next update check in 3m4s
May 9 00:37:33.466657 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 00:37:33.505607 jq[1444]: true
May 9 00:37:33.469541 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 00:37:33.469796 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 00:37:33.470141 systemd[1]: motdgen.service: Deactivated successfully.
May 9 00:37:33.470416 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 00:37:33.487091 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 00:37:33.487312 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 00:37:33.522637 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 9 00:37:33.522751 tar[1454]: linux-amd64/helm
May 9 00:37:33.521748 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 00:37:33.544171 jq[1456]: true
May 9 00:37:33.528137 systemd[1]: Started update-engine.service - Update Engine.
May 9 00:37:33.545743 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 00:37:33.545743 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 00:37:33.545743 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 9 00:37:33.554523 extend-filesystems[1431]: Resized filesystem in /dev/vda9
May 9 00:37:33.546614 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 00:37:33.546639 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 00:37:33.548456 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 00:37:33.548472 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 00:37:33.633534 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 00:37:33.635627 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 00:37:33.635845 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 00:37:33.643078 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button)
May 9 00:37:33.643104 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 00:37:33.644625 systemd-logind[1442]: New seat seat0.
May 9 00:37:33.645712 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 00:37:33.742938 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 00:37:33.748050 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:37:33.750242 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 00:37:33.752438 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 9 00:37:33.764870 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 00:37:33.775417 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 00:37:33.788056 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 00:37:33.795594 systemd[1]: issuegen.service: Deactivated successfully.
May 9 00:37:33.796380 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 00:37:33.799280 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 00:37:33.850446 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 00:37:33.858589 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 00:37:33.860979 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 9 00:37:33.862781 systemd[1]: Reached target getty.target - Login Prompts.
May 9 00:37:34.042802 containerd[1455]: time="2025-05-09T00:37:34.042617962Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 9 00:37:34.068310 containerd[1455]: time="2025-05-09T00:37:34.068239932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 9 00:37:34.070245 containerd[1455]: time="2025-05-09T00:37:34.070191362Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 9 00:37:34.070245 containerd[1455]: time="2025-05-09T00:37:34.070233611Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 9 00:37:34.070360 containerd[1455]: time="2025-05-09T00:37:34.070253088Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 9 00:37:34.070517 containerd[1455]: time="2025-05-09T00:37:34.070496725Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 9 00:37:34.070539 containerd[1455]: time="2025-05-09T00:37:34.070521291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 9 00:37:34.070618 containerd[1455]: time="2025-05-09T00:37:34.070598937Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:37:34.070638 containerd[1455]: time="2025-05-09T00:37:34.070616249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 9 00:37:34.070837 containerd[1455]: time="2025-05-09T00:37:34.070816595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:37:34.070837 containerd[1455]: time="2025-05-09T00:37:34.070835430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 9 00:37:34.070879 containerd[1455]: time="2025-05-09T00:37:34.070848525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:37:34.070879 containerd[1455]: time="2025-05-09T00:37:34.070859846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 9 00:37:34.070979 containerd[1455]: time="2025-05-09T00:37:34.070961887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 9 00:37:34.071247 containerd[1455]: time="2025-05-09T00:37:34.071220733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 9 00:37:34.071391 containerd[1455]: time="2025-05-09T00:37:34.071365525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:37:34.071391 containerd[1455]: time="2025-05-09T00:37:34.071383819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 9 00:37:34.071526 containerd[1455]: time="2025-05-09T00:37:34.071501129Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 9 00:37:34.071602 containerd[1455]: time="2025-05-09T00:37:34.071585487Z" level=info msg="metadata content store policy set" policy=shared
May 9 00:37:34.076561 containerd[1455]: time="2025-05-09T00:37:34.076532587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 9 00:37:34.076598 containerd[1455]: time="2025-05-09T00:37:34.076575838Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 9 00:37:34.076598 containerd[1455]: time="2025-05-09T00:37:34.076590535Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 9 00:37:34.076634 containerd[1455]: time="2025-05-09T00:37:34.076604742Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 9 00:37:34.076634 containerd[1455]: time="2025-05-09T00:37:34.076618528Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 9 00:37:34.076779 containerd[1455]: time="2025-05-09T00:37:34.076754252Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 9 00:37:34.076992 containerd[1455]: time="2025-05-09T00:37:34.076969366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 9 00:37:34.077108 containerd[1455]: time="2025-05-09T00:37:34.077085403Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 9 00:37:34.077108 containerd[1455]: time="2025-05-09T00:37:34.077105712Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 9 00:37:34.077146 containerd[1455]: time="2025-05-09T00:37:34.077118115Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 9 00:37:34.077146 containerd[1455]: time="2025-05-09T00:37:34.077131340Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 9 00:37:34.077182 containerd[1455]: time="2025-05-09T00:37:34.077144785Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 9 00:37:34.077182 containerd[1455]: time="2025-05-09T00:37:34.077157469Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 9 00:37:34.077182 containerd[1455]: time="2025-05-09T00:37:34.077170934Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 9 00:37:34.077240 containerd[1455]: time="2025-05-09T00:37:34.077198906Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 9 00:37:34.077240 containerd[1455]: time="2025-05-09T00:37:34.077213524Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 9 00:37:34.077240 containerd[1455]: time="2025-05-09T00:37:34.077225707Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 9 00:37:34.077240 containerd[1455]: time="2025-05-09T00:37:34.077236236Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 9 00:37:34.077316 containerd[1455]: time="2025-05-09T00:37:34.077269088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077316 containerd[1455]: time="2025-05-09T00:37:34.077283776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077316 containerd[1455]: time="2025-05-09T00:37:34.077299014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077316 containerd[1455]: time="2025-05-09T00:37:34.077310626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077424 containerd[1455]: time="2025-05-09T00:37:34.077345431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077424 containerd[1455]: time="2025-05-09T00:37:34.077360499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077424 containerd[1455]: time="2025-05-09T00:37:34.077384795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077424 containerd[1455]: time="2025-05-09T00:37:34.077406506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077424 containerd[1455]: time="2025-05-09T00:37:34.077420853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077509 containerd[1455]: time="2025-05-09T00:37:34.077436031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077509 containerd[1455]: time="2025-05-09T00:37:34.077448374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077509 containerd[1455]: time="2025-05-09T00:37:34.077459425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077509 containerd[1455]: time="2025-05-09T00:37:34.077472890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077509 containerd[1455]: time="2025-05-09T00:37:34.077487498Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 9 00:37:34.077509 containerd[1455]: time="2025-05-09T00:37:34.077508126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077615 containerd[1455]: time="2025-05-09T00:37:34.077536009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077615 containerd[1455]: time="2025-05-09T00:37:34.077547270Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 9 00:37:34.077615 containerd[1455]: time="2025-05-09T00:37:34.077597404Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 9 00:37:34.077615 containerd[1455]: time="2025-05-09T00:37:34.077611230Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 9 00:37:34.077693 containerd[1455]: time="2025-05-09T00:37:34.077622150Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 9 00:37:34.077693 containerd[1455]: time="2025-05-09T00:37:34.077635585Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 9 00:37:34.077693 containerd[1455]: time="2025-05-09T00:37:34.077645484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 9 00:37:34.077693 containerd[1455]: time="2025-05-09T00:37:34.077661334Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 9 00:37:34.077693 containerd[1455]: time="2025-05-09T00:37:34.077673076Z" level=info msg="NRI interface is disabled by configuration."
May 9 00:37:34.077790 containerd[1455]: time="2025-05-09T00:37:34.077696319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 9 00:37:34.078040 containerd[1455]: time="2025-05-09T00:37:34.077983769Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 9 00:37:34.078040 containerd[1455]: time="2025-05-09T00:37:34.078038391Z" level=info msg="Connect containerd service"
May 9 00:37:34.078181 containerd[1455]: time="2025-05-09T00:37:34.078097522Z" level=info msg="using legacy CRI server"
May 9 00:37:34.078181 containerd[1455]: time="2025-05-09T00:37:34.078105236Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 9 00:37:34.078224 containerd[1455]: time="2025-05-09T00:37:34.078203441Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 9 00:37:34.078859 containerd[1455]: time="2025-05-09T00:37:34.078828794Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 00:37:34.079243 containerd[1455]: time="2025-05-09T00:37:34.079218985Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 9 00:37:34.079294 containerd[1455]: time="2025-05-09T00:37:34.079273738Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 9 00:37:34.079398 containerd[1455]: time="2025-05-09T00:37:34.079281733Z" level=info msg="Start subscribing containerd event"
May 9 00:37:34.079554 containerd[1455]: time="2025-05-09T00:37:34.079530319Z" level=info msg="Start recovering state"
May 9 00:37:34.080108 containerd[1455]: time="2025-05-09T00:37:34.080074170Z" level=info msg="Start event monitor"
May 9 00:37:34.080108 containerd[1455]: time="2025-05-09T00:37:34.080109356Z" level=info msg="Start snapshots syncer"
May 9 00:37:34.080238 containerd[1455]: time="2025-05-09T00:37:34.080125826Z" level=info msg="Start cni network conf syncer for default"
May 9 00:37:34.080238 containerd[1455]: time="2025-05-09T00:37:34.080137649Z" level=info msg="Start streaming server"
May 9 00:37:34.080304 systemd[1]: Started containerd.service - containerd container runtime.
May 9 00:37:34.080977 containerd[1455]: time="2025-05-09T00:37:34.080562015Z" level=info msg="containerd successfully booted in 0.039092s"
May 9 00:37:34.247751 tar[1454]: linux-amd64/LICENSE
May 9 00:37:34.247861 tar[1454]: linux-amd64/README.md
May 9 00:37:34.265728 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 9 00:37:34.843526 systemd-networkd[1389]: eth0: Gained IPv6LL
May 9 00:37:34.846924 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 00:37:34.848801 systemd[1]: Reached target network-online.target - Network is Online.
May 9 00:37:34.862673 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 9 00:37:34.865675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:37:34.868199 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 00:37:34.887892 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 9 00:37:34.888163 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 9 00:37:34.890095 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:37:34.892427 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:37:36.373292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:36.375217 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:37:36.376648 systemd[1]: Startup finished in 685ms (kernel) + 6.755s (initrd) + 5.603s (userspace) = 13.044s. May 9 00:37:36.389869 (kubelet)[1542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:37:37.051751 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:37:37.052975 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:34360.service - OpenSSH per-connection server daemon (10.0.0.1:34360). May 9 00:37:37.111445 kubelet[1542]: E0509 00:37:37.111384 1542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:37:37.115469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:37:37.115665 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:37:37.115978 systemd[1]: kubelet.service: Consumed 2.101s CPU time. May 9 00:37:37.116943 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 34360 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:37:37.118988 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:37:37.127933 systemd-logind[1442]: New session 1 of user core. May 9 00:37:37.129181 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 9 00:37:37.138538 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:37:37.151593 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:37:37.154197 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:37:37.163096 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:37:37.262948 systemd[1559]: Queued start job for default target default.target. May 9 00:37:37.278521 systemd[1559]: Created slice app.slice - User Application Slice. May 9 00:37:37.278553 systemd[1559]: Reached target paths.target - Paths. May 9 00:37:37.278573 systemd[1559]: Reached target timers.target - Timers. May 9 00:37:37.280264 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:37:37.294244 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:37:37.294389 systemd[1559]: Reached target sockets.target - Sockets. May 9 00:37:37.294408 systemd[1559]: Reached target basic.target - Basic System. May 9 00:37:37.294448 systemd[1559]: Reached target default.target - Main User Target. May 9 00:37:37.294532 systemd[1559]: Startup finished in 125ms. May 9 00:37:37.294979 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:37:37.297304 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:37:37.360549 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:34372.service - OpenSSH per-connection server daemon (10.0.0.1:34372). May 9 00:37:37.414530 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 34372 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:37:37.416033 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:37:37.420047 systemd-logind[1442]: New session 2 of user core. May 9 00:37:37.429470 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 9 00:37:37.481821 sshd[1570]: pam_unix(sshd:session): session closed for user core May 9 00:37:37.495475 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:34372.service: Deactivated successfully. May 9 00:37:37.497125 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:37:37.498612 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. May 9 00:37:37.508578 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:34374.service - OpenSSH per-connection server daemon (10.0.0.1:34374). May 9 00:37:37.509395 systemd-logind[1442]: Removed session 2. May 9 00:37:37.543354 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 34374 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:37:37.544840 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:37:37.548628 systemd-logind[1442]: New session 3 of user core. May 9 00:37:37.562493 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:37:37.612150 sshd[1577]: pam_unix(sshd:session): session closed for user core May 9 00:37:37.629156 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:34374.service: Deactivated successfully. May 9 00:37:37.630875 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:37:37.632468 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. May 9 00:37:37.633619 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:34376.service - OpenSSH per-connection server daemon (10.0.0.1:34376). May 9 00:37:37.634286 systemd-logind[1442]: Removed session 3. May 9 00:37:37.672959 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 34376 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:37:37.674685 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:37:37.678246 systemd-logind[1442]: New session 4 of user core. May 9 00:37:37.688435 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 9 00:37:37.741860 sshd[1584]: pam_unix(sshd:session): session closed for user core May 9 00:37:37.755161 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:34376.service: Deactivated successfully. May 9 00:37:37.756919 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:37:37.758480 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. May 9 00:37:37.759726 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:34378.service - OpenSSH per-connection server daemon (10.0.0.1:34378). May 9 00:37:37.760494 systemd-logind[1442]: Removed session 4. May 9 00:37:37.799466 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 34378 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:37:37.801015 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:37:37.805301 systemd-logind[1442]: New session 5 of user core. May 9 00:37:37.811452 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:37:37.870974 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:37:37.871323 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:37:37.887974 sudo[1594]: pam_unix(sudo:session): session closed for user root May 9 00:37:37.890169 sshd[1591]: pam_unix(sshd:session): session closed for user core May 9 00:37:37.907110 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:34378.service: Deactivated successfully. May 9 00:37:37.908896 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:37:37.910555 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. May 9 00:37:37.920562 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:34386.service - OpenSSH per-connection server daemon (10.0.0.1:34386). May 9 00:37:37.921538 systemd-logind[1442]: Removed session 5. 
May 9 00:37:37.956162 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 34386 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:37:37.957918 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:37:37.961944 systemd-logind[1442]: New session 6 of user core. May 9 00:37:37.971494 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:37:38.025664 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:37:38.026006 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:37:38.030246 sudo[1603]: pam_unix(sudo:session): session closed for user root May 9 00:37:38.037178 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:37:38.037559 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:37:38.053584 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 00:37:38.055252 auditctl[1606]: No rules May 9 00:37:38.056646 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:37:38.056932 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:37:38.058626 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:37:38.090340 augenrules[1624]: No rules May 9 00:37:38.092172 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:37:38.093782 sudo[1602]: pam_unix(sudo:session): session closed for user root May 9 00:37:38.095650 sshd[1599]: pam_unix(sshd:session): session closed for user core May 9 00:37:38.108139 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:34386.service: Deactivated successfully. May 9 00:37:38.109902 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:37:38.111505 systemd-logind[1442]: Session 6 logged out. 
Waiting for processes to exit. May 9 00:37:38.124558 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:34402.service - OpenSSH per-connection server daemon (10.0.0.1:34402). May 9 00:37:38.125541 systemd-logind[1442]: Removed session 6. May 9 00:37:38.159430 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 34402 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:37:38.160808 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:37:38.164467 systemd-logind[1442]: New session 7 of user core. May 9 00:37:38.177456 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:37:38.230934 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:37:38.231286 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:37:38.544616 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 00:37:38.544756 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:37:38.827807 dockerd[1654]: time="2025-05-09T00:37:38.827642020Z" level=info msg="Starting up" May 9 00:37:39.393959 dockerd[1654]: time="2025-05-09T00:37:39.393902262Z" level=info msg="Loading containers: start." May 9 00:37:39.512357 kernel: Initializing XFRM netlink socket May 9 00:37:39.592137 systemd-networkd[1389]: docker0: Link UP May 9 00:37:39.618077 dockerd[1654]: time="2025-05-09T00:37:39.618042084Z" level=info msg="Loading containers: done." May 9 00:37:39.638009 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck339688236-merged.mount: Deactivated successfully. 
May 9 00:37:39.640292 dockerd[1654]: time="2025-05-09T00:37:39.640253326Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:37:39.640374 dockerd[1654]: time="2025-05-09T00:37:39.640327646Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 00:37:39.640485 dockerd[1654]: time="2025-05-09T00:37:39.640461136Z" level=info msg="Daemon has completed initialization" May 9 00:37:39.679657 dockerd[1654]: time="2025-05-09T00:37:39.679503318Z" level=info msg="API listen on /run/docker.sock" May 9 00:37:39.679668 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:37:40.646250 containerd[1455]: time="2025-05-09T00:37:40.646186119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 9 00:37:41.277346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348912176.mount: Deactivated successfully. 
May 9 00:37:42.424444 containerd[1455]: time="2025-05-09T00:37:42.424365365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:42.425115 containerd[1455]: time="2025-05-09T00:37:42.425025533Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 9 00:37:42.426258 containerd[1455]: time="2025-05-09T00:37:42.426226215Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:42.428764 containerd[1455]: time="2025-05-09T00:37:42.428720414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:42.429919 containerd[1455]: time="2025-05-09T00:37:42.429888995Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.783647794s" May 9 00:37:42.429985 containerd[1455]: time="2025-05-09T00:37:42.429923771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 9 00:37:42.431556 containerd[1455]: time="2025-05-09T00:37:42.431535233Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 9 00:37:44.304464 containerd[1455]: time="2025-05-09T00:37:44.304362101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:44.305090 containerd[1455]: time="2025-05-09T00:37:44.304928634Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 9 00:37:44.306143 containerd[1455]: time="2025-05-09T00:37:44.306066208Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:44.310364 containerd[1455]: time="2025-05-09T00:37:44.310319215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:44.311655 containerd[1455]: time="2025-05-09T00:37:44.311594427Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.880022064s" May 9 00:37:44.311703 containerd[1455]: time="2025-05-09T00:37:44.311687121Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 9 00:37:44.312720 containerd[1455]: time="2025-05-09T00:37:44.312695202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 9 00:37:45.883678 containerd[1455]: time="2025-05-09T00:37:45.883590202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:45.884354 containerd[1455]: time="2025-05-09T00:37:45.884246984Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 9 00:37:45.885425 containerd[1455]: time="2025-05-09T00:37:45.885381722Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:45.889647 containerd[1455]: time="2025-05-09T00:37:45.889596348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:45.890804 containerd[1455]: time="2025-05-09T00:37:45.890761323Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.578030054s" May 9 00:37:45.890866 containerd[1455]: time="2025-05-09T00:37:45.890804864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 9 00:37:45.891532 containerd[1455]: time="2025-05-09T00:37:45.891493596Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 9 00:37:47.186146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286512975.mount: Deactivated successfully. May 9 00:37:47.187538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:37:47.202695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:37:47.500366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 00:37:47.507906 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:37:47.650438 kubelet[1880]: E0509 00:37:47.650377 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:37:47.657929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:37:47.658288 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:37:48.789641 containerd[1455]: time="2025-05-09T00:37:48.789571232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:48.790551 containerd[1455]: time="2025-05-09T00:37:48.790445702Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 9 00:37:48.791870 containerd[1455]: time="2025-05-09T00:37:48.791835829Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:48.794051 containerd[1455]: time="2025-05-09T00:37:48.794017021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:48.795107 containerd[1455]: time="2025-05-09T00:37:48.794921707Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.903388236s" May 9 00:37:48.795318 containerd[1455]: time="2025-05-09T00:37:48.795238361Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 9 00:37:48.798367 containerd[1455]: time="2025-05-09T00:37:48.798311345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:37:49.300146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214865299.mount: Deactivated successfully. May 9 00:37:50.234823 containerd[1455]: time="2025-05-09T00:37:50.234761055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:50.235569 containerd[1455]: time="2025-05-09T00:37:50.235488399Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 9 00:37:50.236860 containerd[1455]: time="2025-05-09T00:37:50.236823663Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:50.240472 containerd[1455]: time="2025-05-09T00:37:50.240433685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:50.241480 containerd[1455]: time="2025-05-09T00:37:50.241433240Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.443068554s" May 9 00:37:50.241480 containerd[1455]: time="2025-05-09T00:37:50.241464809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 9 00:37:50.242246 containerd[1455]: time="2025-05-09T00:37:50.242199337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 00:37:50.745694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3183723840.mount: Deactivated successfully. May 9 00:37:50.776052 containerd[1455]: time="2025-05-09T00:37:50.775998301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:50.777081 containerd[1455]: time="2025-05-09T00:37:50.777033042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 9 00:37:50.778474 containerd[1455]: time="2025-05-09T00:37:50.778440622Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:50.780929 containerd[1455]: time="2025-05-09T00:37:50.780880108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:50.781714 containerd[1455]: time="2025-05-09T00:37:50.781673065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 539.425739ms" May 9 
00:37:50.781754 containerd[1455]: time="2025-05-09T00:37:50.781713942Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 9 00:37:50.782279 containerd[1455]: time="2025-05-09T00:37:50.782255688Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 9 00:37:51.559116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025870324.mount: Deactivated successfully. May 9 00:37:54.192542 containerd[1455]: time="2025-05-09T00:37:54.192461820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:54.193571 containerd[1455]: time="2025-05-09T00:37:54.193489387Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 9 00:37:54.194793 containerd[1455]: time="2025-05-09T00:37:54.194750643Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:54.198055 containerd[1455]: time="2025-05-09T00:37:54.198001841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:54.199134 containerd[1455]: time="2025-05-09T00:37:54.199099731Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.41681631s" May 9 00:37:54.199134 containerd[1455]: time="2025-05-09T00:37:54.199128475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference 
\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 9 00:37:56.807898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:56.818531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:37:56.918616 systemd[1]: Reloading requested from client PID 2020 ('systemctl') (unit session-7.scope)... May 9 00:37:56.918635 systemd[1]: Reloading... May 9 00:37:57.033374 zram_generator::config[2056]: No configuration found. May 9 00:37:57.227248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:37:57.306015 systemd[1]: Reloading finished in 386 ms. May 9 00:37:57.360988 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:37:57.361085 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:37:57.361372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:57.364029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:37:57.513395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:57.523608 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:37:57.629369 kubelet[2109]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:37:57.629369 kubelet[2109]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 9 00:37:57.629369 kubelet[2109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:37:57.629797 kubelet[2109]: I0509 00:37:57.629436 2109 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:37:57.901381 kubelet[2109]: I0509 00:37:57.900146 2109 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:37:57.901381 kubelet[2109]: I0509 00:37:57.900190 2109 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:37:57.901381 kubelet[2109]: I0509 00:37:57.900638 2109 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:37:57.926592 kubelet[2109]: I0509 00:37:57.926534 2109 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:37:57.927045 kubelet[2109]: E0509 00:37:57.927001 2109 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" May 9 00:37:57.935820 kubelet[2109]: E0509 00:37:57.935771 2109 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:37:57.935820 kubelet[2109]: I0509 00:37:57.935820 2109 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 9 00:37:57.942561 kubelet[2109]: I0509 00:37:57.942513 2109 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 00:37:57.944076 kubelet[2109]: I0509 00:37:57.944033 2109 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 9 00:37:57.944293 kubelet[2109]: I0509 00:37:57.944237 2109 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 00:37:57.944513 kubelet[2109]: I0509 00:37:57.944279 2109 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 9 00:37:57.944704 kubelet[2109]: I0509 00:37:57.944526 2109 topology_manager.go:138] "Creating topology manager with none policy"
May 9 00:37:57.944704 kubelet[2109]: I0509 00:37:57.944537 2109 container_manager_linux.go:300] "Creating device plugin manager"
May 9 00:37:57.944704 kubelet[2109]: I0509 00:37:57.944698 2109 state_mem.go:36] "Initialized new in-memory state store"
May 9 00:37:57.946292 kubelet[2109]: I0509 00:37:57.946256 2109 kubelet.go:408] "Attempting to sync node with API server"
May 9 00:37:57.946292 kubelet[2109]: I0509 00:37:57.946286 2109 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 9 00:37:57.946400 kubelet[2109]: I0509 00:37:57.946365 2109 kubelet.go:314] "Adding apiserver pod source"
May 9 00:37:57.946430 kubelet[2109]: I0509 00:37:57.946401 2109 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 9 00:37:57.952510 kubelet[2109]: W0509 00:37:57.952417 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
May 9 00:37:57.952510 kubelet[2109]: E0509 00:37:57.952512 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:37:57.954086 kubelet[2109]: W0509 00:37:57.953956 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
May 9 00:37:57.954086 kubelet[2109]: E0509 00:37:57.954020 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:37:57.957269 kubelet[2109]: I0509 00:37:57.957223 2109 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 9 00:37:57.959526 kubelet[2109]: I0509 00:37:57.959500 2109 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 9 00:37:57.960212 kubelet[2109]: W0509 00:37:57.960175 2109 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 9 00:37:57.961156 kubelet[2109]: I0509 00:37:57.961128 2109 server.go:1269] "Started kubelet"
May 9 00:37:57.962921 kubelet[2109]: I0509 00:37:57.961422 2109 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 9 00:37:57.962921 kubelet[2109]: I0509 00:37:57.961790 2109 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 9 00:37:57.962921 kubelet[2109]: I0509 00:37:57.962196 2109 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 9 00:37:57.962921 kubelet[2109]: I0509 00:37:57.962848 2109 server.go:460] "Adding debug handlers to kubelet server"
May 9 00:37:57.962921 kubelet[2109]: I0509 00:37:57.962805 2109 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 9 00:37:57.964164 kubelet[2109]: I0509 00:37:57.963725 2109 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 9 00:37:57.964281 kubelet[2109]: E0509 00:37:57.964259 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 00:37:57.964946 kubelet[2109]: I0509 00:37:57.964316 2109 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 9 00:37:57.964946 kubelet[2109]: I0509 00:37:57.964614 2109 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 9 00:37:57.964946 kubelet[2109]: I0509 00:37:57.964694 2109 reconciler.go:26] "Reconciler: start to sync state"
May 9 00:37:57.965800 kubelet[2109]: W0509 00:37:57.965284 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
May 9 00:37:57.965800 kubelet[2109]: E0509 00:37:57.965363 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:37:57.965800 kubelet[2109]: E0509 00:37:57.965484 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms"
May 9 00:37:57.966758 kubelet[2109]: E0509 00:37:57.966727 2109 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 9 00:37:57.967023 kubelet[2109]: I0509 00:37:57.966999 2109 factory.go:221] Registration of the containerd container factory successfully
May 9 00:37:57.967023 kubelet[2109]: I0509 00:37:57.967020 2109 factory.go:221] Registration of the systemd container factory successfully
May 9 00:37:57.967136 kubelet[2109]: I0509 00:37:57.967115 2109 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 9 00:37:57.968411 kubelet[2109]: E0509 00:37:57.966533 2109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db4d9c8e6e1ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:37:57.961097674 +0000 UTC m=+0.431620438,LastTimestamp:2025-05-09 00:37:57.961097674 +0000 UTC m=+0.431620438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 9 00:37:58.096417 kubelet[2109]: E0509 00:37:58.095127 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 00:37:58.097845 kubelet[2109]: I0509 00:37:58.097786 2109 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 9 00:37:58.099351 kubelet[2109]: I0509 00:37:58.099286 2109 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 9 00:37:58.099462 kubelet[2109]: I0509 00:37:58.099428 2109 status_manager.go:217] "Starting to sync pod status with apiserver"
May 9 00:37:58.099485 kubelet[2109]: I0509 00:37:58.099471 2109 kubelet.go:2321] "Starting kubelet main sync loop"
May 9 00:37:58.099912 kubelet[2109]: E0509 00:37:58.099525 2109 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 9 00:37:58.101092 kubelet[2109]: I0509 00:37:58.101077 2109 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 9 00:37:58.101245 kubelet[2109]: I0509 00:37:58.101177 2109 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 9 00:37:58.101245 kubelet[2109]: I0509 00:37:58.101205 2109 state_mem.go:36] "Initialized new in-memory state store"
May 9 00:37:58.101987 kubelet[2109]: W0509 00:37:58.101942 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
May 9 00:37:58.102171 kubelet[2109]: E0509 00:37:58.102130 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:37:58.166931 kubelet[2109]: E0509 00:37:58.166811 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms"
May 9 00:37:58.195978 kubelet[2109]: E0509 00:37:58.195908 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 00:37:58.200137 kubelet[2109]: E0509 00:37:58.200102 2109 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 9 00:37:58.296733 kubelet[2109]: E0509 00:37:58.296642 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 00:37:58.397199 kubelet[2109]: E0509 00:37:58.397136 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 00:37:58.400209 kubelet[2109]: E0509 00:37:58.400177 2109 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 9 00:37:58.498270 kubelet[2109]: E0509 00:37:58.498134 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 00:37:58.568069 kubelet[2109]: E0509 00:37:58.567980 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms"
May 9 00:37:58.599000 kubelet[2109]: E0509 00:37:58.598961 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 00:37:58.668365 kubelet[2109]: I0509 00:37:58.668274 2109 policy_none.go:49] "None policy: Start"
May 9 00:37:58.669151 kubelet[2109]: I0509 00:37:58.669131 2109 memory_manager.go:170] "Starting memorymanager" policy="None"
May 9 00:37:58.669184 kubelet[2109]: I0509 00:37:58.669163 2109 state_mem.go:35] "Initializing new in-memory state store"
May 9 00:37:58.699473 kubelet[2109]: E0509 00:37:58.699412 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 00:37:58.723319 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 9 00:37:58.738653 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 9 00:37:58.741740 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 9 00:37:58.753508 kubelet[2109]: I0509 00:37:58.752316 2109 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 9 00:37:58.753508 kubelet[2109]: I0509 00:37:58.752616 2109 eviction_manager.go:189] "Eviction manager: starting control loop"
May 9 00:37:58.753508 kubelet[2109]: I0509 00:37:58.752635 2109 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 9 00:37:58.753508 kubelet[2109]: I0509 00:37:58.753103 2109 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 9 00:37:58.758749 kubelet[2109]: E0509 00:37:58.756863 2109 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 9 00:37:58.809483 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 9 00:37:58.835165 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 9 00:37:58.846956 systemd[1]: Created slice kubepods-burstable-podf14b3b3c26f30f4b9b7336cf6959c992.slice - libcontainer container kubepods-burstable-podf14b3b3c26f30f4b9b7336cf6959c992.slice.
May 9 00:37:58.854728 kubelet[2109]: I0509 00:37:58.854699 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 9 00:37:58.855145 kubelet[2109]: E0509 00:37:58.855107 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
May 9 00:37:58.900899 kubelet[2109]: I0509 00:37:58.900835 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 9 00:37:58.900899 kubelet[2109]: I0509 00:37:58.900890 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 9 00:37:58.900899 kubelet[2109]: I0509 00:37:58.900919 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 9 00:37:58.901102 kubelet[2109]: I0509 00:37:58.900944 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 9 00:37:58.901102 kubelet[2109]: I0509 00:37:58.900969 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 9 00:37:58.901102 kubelet[2109]: I0509 00:37:58.900984 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f14b3b3c26f30f4b9b7336cf6959c992-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f14b3b3c26f30f4b9b7336cf6959c992\") " pod="kube-system/kube-apiserver-localhost"
May 9 00:37:58.901102 kubelet[2109]: I0509 00:37:58.901009 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f14b3b3c26f30f4b9b7336cf6959c992-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f14b3b3c26f30f4b9b7336cf6959c992\") " pod="kube-system/kube-apiserver-localhost"
May 9 00:37:58.901102 kubelet[2109]: I0509 00:37:58.901031 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f14b3b3c26f30f4b9b7336cf6959c992-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f14b3b3c26f30f4b9b7336cf6959c992\") " pod="kube-system/kube-apiserver-localhost"
May 9 00:37:58.901216 kubelet[2109]: I0509 00:37:58.901046 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 9 00:37:59.015152 kubelet[2109]: W0509 00:37:59.014969 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
May 9 00:37:59.015152 kubelet[2109]: E0509 00:37:59.015051 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:37:59.056351 kubelet[2109]: I0509 00:37:59.056306 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 9 00:37:59.056717 kubelet[2109]: E0509 00:37:59.056685 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
May 9 00:37:59.133303 kubelet[2109]: E0509 00:37:59.133273 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:59.133868 containerd[1455]: time="2025-05-09T00:37:59.133826187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 9 00:37:59.146022 kubelet[2109]: E0509 00:37:59.145988 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:59.146438 containerd[1455]: time="2025-05-09T00:37:59.146394842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 9 00:37:59.149649 kubelet[2109]: E0509 00:37:59.149618 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:59.150244 containerd[1455]: time="2025-05-09T00:37:59.150100643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f14b3b3c26f30f4b9b7336cf6959c992,Namespace:kube-system,Attempt:0,}"
May 9 00:37:59.368955 kubelet[2109]: E0509 00:37:59.368885 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s"
May 9 00:37:59.389296 kubelet[2109]: W0509 00:37:59.389231 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
May 9 00:37:59.389296 kubelet[2109]: E0509 00:37:59.389296 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:37:59.458249 kubelet[2109]: I0509 00:37:59.458215 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 9 00:37:59.458686 kubelet[2109]: E0509 00:37:59.458623 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
May 9 00:37:59.541787 kubelet[2109]: W0509 00:37:59.541708 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
May 9 00:37:59.541883 kubelet[2109]: E0509 00:37:59.541794 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:37:59.596949 kubelet[2109]: W0509 00:37:59.596873 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
May 9 00:37:59.597075 kubelet[2109]: E0509 00:37:59.596960 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:37:59.683836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257269298.mount: Deactivated successfully.
May 9 00:38:00.034457 kubelet[2109]: E0509 00:38:00.034311 2109 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
May 9 00:38:00.259996 kubelet[2109]: I0509 00:38:00.259962 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 9 00:38:00.260444 kubelet[2109]: E0509 00:38:00.260396 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
May 9 00:38:00.318413 containerd[1455]: time="2025-05-09T00:38:00.318355345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:38:00.319465 containerd[1455]: time="2025-05-09T00:38:00.319429480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:38:00.320531 containerd[1455]: time="2025-05-09T00:38:00.320462287Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 9 00:38:00.321394 containerd[1455]: time="2025-05-09T00:38:00.321363878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:38:00.322385 containerd[1455]: time="2025-05-09T00:38:00.322326103Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 9 00:38:00.323281 containerd[1455]: time="2025-05-09T00:38:00.323234357Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:38:00.324258 containerd[1455]: time="2025-05-09T00:38:00.324192674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 9 00:38:00.327163 containerd[1455]: time="2025-05-09T00:38:00.327125786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 00:38:00.329058 containerd[1455]: time="2025-05-09T00:38:00.329012204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.195097191s"
May 9 00:38:00.329908 containerd[1455]: time="2025-05-09T00:38:00.329857730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.179682577s"
May 9 00:38:00.330589 containerd[1455]: time="2025-05-09T00:38:00.330558695Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.18408204s"
May 9 00:38:00.546155 containerd[1455]: time="2025-05-09T00:38:00.545127625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:38:00.546155 containerd[1455]: time="2025-05-09T00:38:00.545912638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:38:00.546155 containerd[1455]: time="2025-05-09T00:38:00.545923959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:00.546155 containerd[1455]: time="2025-05-09T00:38:00.546019248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:00.546990 containerd[1455]: time="2025-05-09T00:38:00.546883829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:38:00.546990 containerd[1455]: time="2025-05-09T00:38:00.546943321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:38:00.546990 containerd[1455]: time="2025-05-09T00:38:00.546669177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:38:00.546990 containerd[1455]: time="2025-05-09T00:38:00.546726544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:38:00.546990 containerd[1455]: time="2025-05-09T00:38:00.546749447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:00.546990 containerd[1455]: time="2025-05-09T00:38:00.546866036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:00.547135 containerd[1455]: time="2025-05-09T00:38:00.546958289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:00.547135 containerd[1455]: time="2025-05-09T00:38:00.547028090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:00.583549 systemd[1]: Started cri-containerd-c2fb417aae9061eb0ca49a5736b67f952cc108b104f344595cd11db9ebf91c80.scope - libcontainer container c2fb417aae9061eb0ca49a5736b67f952cc108b104f344595cd11db9ebf91c80.
May 9 00:38:00.588224 systemd[1]: Started cri-containerd-be4e0803fa3e9bc39f00b43e89249f97d782861b4db6f77485764db79cd852d8.scope - libcontainer container be4e0803fa3e9bc39f00b43e89249f97d782861b4db6f77485764db79cd852d8.
May 9 00:38:00.590233 systemd[1]: Started cri-containerd-d37603d43cc01ec6ee637b6696f7afad4dd4a0f78c93e99a50afb6b78f5e043f.scope - libcontainer container d37603d43cc01ec6ee637b6696f7afad4dd4a0f78c93e99a50afb6b78f5e043f.
May 9 00:38:00.745524 containerd[1455]: time="2025-05-09T00:38:00.745046426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f14b3b3c26f30f4b9b7336cf6959c992,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2fb417aae9061eb0ca49a5736b67f952cc108b104f344595cd11db9ebf91c80\""
May 9 00:38:00.747272 containerd[1455]: time="2025-05-09T00:38:00.747174658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"be4e0803fa3e9bc39f00b43e89249f97d782861b4db6f77485764db79cd852d8\""
May 9 00:38:00.749647 kubelet[2109]: E0509 00:38:00.749129 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:00.749647 kubelet[2109]: E0509 00:38:00.749233 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:00.752919 containerd[1455]: time="2025-05-09T00:38:00.752871012Z" level=info msg="CreateContainer within sandbox \"be4e0803fa3e9bc39f00b43e89249f97d782861b4db6f77485764db79cd852d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 9 00:38:00.753186 containerd[1455]: time="2025-05-09T00:38:00.753150567Z" level=info msg="CreateContainer within sandbox \"c2fb417aae9061eb0ca49a5736b67f952cc108b104f344595cd11db9ebf91c80\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 9 00:38:00.753723 containerd[1455]: time="2025-05-09T00:38:00.753691381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d37603d43cc01ec6ee637b6696f7afad4dd4a0f78c93e99a50afb6b78f5e043f\""
May 9 00:38:00.754791 kubelet[2109]: E0509 00:38:00.754759 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:00.756412 containerd[1455]: time="2025-05-09T00:38:00.756374624Z" level=info msg="CreateContainer within sandbox \"d37603d43cc01ec6ee637b6696f7afad4dd4a0f78c93e99a50afb6b78f5e043f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 9 00:38:00.785480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount322230050.mount: Deactivated successfully.
May 9 00:38:00.786981 containerd[1455]: time="2025-05-09T00:38:00.786919267Z" level=info msg="CreateContainer within sandbox \"be4e0803fa3e9bc39f00b43e89249f97d782861b4db6f77485764db79cd852d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a960ba7836826fe4522e421906254001d46ef09606efda0782fe32e800a4c0b\""
May 9 00:38:00.788020 containerd[1455]: time="2025-05-09T00:38:00.787973304Z" level=info msg="StartContainer for \"1a960ba7836826fe4522e421906254001d46ef09606efda0782fe32e800a4c0b\""
May 9 00:38:00.793464 containerd[1455]: time="2025-05-09T00:38:00.793401857Z" level=info msg="CreateContainer within sandbox \"c2fb417aae9061eb0ca49a5736b67f952cc108b104f344595cd11db9ebf91c80\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ddda9874f4c268e1335e634c1e76154903b6b7141ae435c999928f711cea11b8\""
May 9 00:38:00.794226 containerd[1455]: time="2025-05-09T00:38:00.794154368Z" level=info msg="StartContainer for \"ddda9874f4c268e1335e634c1e76154903b6b7141ae435c999928f711cea11b8\""
May 9 00:38:00.797803 containerd[1455]: time="2025-05-09T00:38:00.797584432Z" level=info msg="CreateContainer within sandbox \"d37603d43cc01ec6ee637b6696f7afad4dd4a0f78c93e99a50afb6b78f5e043f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e3883b673d85611d5c0385821a968c9d6caf44ac028882272ece75122734e4d\""
May 9
00:38:00.799151 containerd[1455]: time="2025-05-09T00:38:00.799031757Z" level=info msg="StartContainer for \"5e3883b673d85611d5c0385821a968c9d6caf44ac028882272ece75122734e4d\"" May 9 00:38:00.827772 systemd[1]: Started cri-containerd-ddda9874f4c268e1335e634c1e76154903b6b7141ae435c999928f711cea11b8.scope - libcontainer container ddda9874f4c268e1335e634c1e76154903b6b7141ae435c999928f711cea11b8. May 9 00:38:00.832392 systemd[1]: Started cri-containerd-1a960ba7836826fe4522e421906254001d46ef09606efda0782fe32e800a4c0b.scope - libcontainer container 1a960ba7836826fe4522e421906254001d46ef09606efda0782fe32e800a4c0b. May 9 00:38:00.835084 systemd[1]: Started cri-containerd-5e3883b673d85611d5c0385821a968c9d6caf44ac028882272ece75122734e4d.scope - libcontainer container 5e3883b673d85611d5c0385821a968c9d6caf44ac028882272ece75122734e4d. May 9 00:38:00.878906 containerd[1455]: time="2025-05-09T00:38:00.878856155Z" level=info msg="StartContainer for \"ddda9874f4c268e1335e634c1e76154903b6b7141ae435c999928f711cea11b8\" returns successfully" May 9 00:38:00.885669 containerd[1455]: time="2025-05-09T00:38:00.885631944Z" level=info msg="StartContainer for \"5e3883b673d85611d5c0385821a968c9d6caf44ac028882272ece75122734e4d\" returns successfully" May 9 00:38:00.886018 containerd[1455]: time="2025-05-09T00:38:00.885816089Z" level=info msg="StartContainer for \"1a960ba7836826fe4522e421906254001d46ef09606efda0782fe32e800a4c0b\" returns successfully" May 9 00:38:01.112870 kubelet[2109]: E0509 00:38:01.112373 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:01.125424 kubelet[2109]: E0509 00:38:01.123029 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:01.126598 kubelet[2109]: E0509 00:38:01.126577 2109 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:01.862019 kubelet[2109]: I0509 00:38:01.861978 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:38:02.129760 kubelet[2109]: E0509 00:38:02.129631 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:02.555167 kubelet[2109]: E0509 00:38:02.554375 2109 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:38:02.655945 kubelet[2109]: I0509 00:38:02.655895 2109 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 00:38:02.655945 kubelet[2109]: E0509 00:38:02.655931 2109 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 9 00:38:02.972752 kubelet[2109]: I0509 00:38:02.972603 2109 apiserver.go:52] "Watching apiserver" May 9 00:38:03.065700 kubelet[2109]: I0509 00:38:03.065652 2109 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 00:38:05.047123 systemd[1]: Reloading requested from client PID 2386 ('systemctl') (unit session-7.scope)... May 9 00:38:05.047144 systemd[1]: Reloading... May 9 00:38:05.125370 zram_generator::config[2426]: No configuration found. May 9 00:38:05.236898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:38:05.327392 systemd[1]: Reloading finished in 279 ms. May 9 00:38:05.374419 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
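The repeated `dns.go:153` "Nameserver limits exceeded" errors above come from kubelet capping the nameservers it writes into pod resolv.conf files at three, matching the glibc resolver limit; any further host entries are dropped, which is why the applied line shows only `1.1.1.1 1.0.0.1 8.8.8.8`. A hedged sketch of that truncation (constant and function names are ours):

```python
MAX_NAMESERVERS = 3  # kubelet keeps at most three, matching the glibc resolver

def clamp_nameservers(servers: list[str]) -> tuple[list[str], bool]:
    """Return the applied nameserver list and whether any entries were omitted."""
    if len(servers) <= MAX_NAMESERVERS:
        return list(servers), False
    return servers[:MAX_NAMESERVERS], True
```

The warning is harmless in the sense that DNS still works; it simply records that some upstream resolvers were ignored.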
May 9 00:38:05.374614 kubelet[2109]: I0509 00:38:05.374316 2109 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 00:38:05.383258 systemd[1]: kubelet.service: Deactivated successfully.
May 9 00:38:05.383570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:38:05.383643 systemd[1]: kubelet.service: Consumed 1.344s CPU time, 121.9M memory peak, 0B memory swap peak.
May 9 00:38:05.393539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:38:05.539114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 00:38:05.545084 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 00:38:05.613946 kubelet[2470]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 00:38:05.613946 kubelet[2470]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 9 00:38:05.613946 kubelet[2470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
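The deprecation warnings above point at the kubelet config file referenced by `--config`. As a hedged sketch (the endpoint and plugin-dir values below are assumptions for illustration, not taken from this log), the flagged options map onto `KubeletConfiguration` fields roughly like this; the config file may be written as JSON or YAML:

```python
import json

# Hypothetical KubeletConfiguration fragment replacing the deprecated
# --container-runtime-endpoint and --volume-plugin-dir flags.
# Both paths below are assumed values, not read from the log.
config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    "volumePluginDir": "/var/lib/kubelet/volumeplugins",
}
print(json.dumps(config, indent=2))
```

`--pod-infra-container-image` has no config-file replacement; per the warning, the sandbox image is obtained from the CRI runtime instead.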
May 9 00:38:05.613946 kubelet[2470]: I0509 00:38:05.613700 2470 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:38:05.623052 kubelet[2470]: I0509 00:38:05.623009 2470 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:38:05.623052 kubelet[2470]: I0509 00:38:05.623036 2470 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:38:05.623294 kubelet[2470]: I0509 00:38:05.623261 2470 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:38:05.624577 kubelet[2470]: I0509 00:38:05.624554 2470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:38:05.626408 kubelet[2470]: I0509 00:38:05.626362 2470 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:38:05.629300 kubelet[2470]: E0509 00:38:05.629251 2470 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:38:05.629300 kubelet[2470]: I0509 00:38:05.629279 2470 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:38:05.633572 kubelet[2470]: I0509 00:38:05.633548 2470 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:38:05.633716 kubelet[2470]: I0509 00:38:05.633688 2470 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 00:38:05.633872 kubelet[2470]: I0509 00:38:05.633829 2470 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:38:05.634032 kubelet[2470]: I0509 00:38:05.633863 2470 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} May 9 00:38:05.634112 kubelet[2470]: I0509 00:38:05.634038 2470 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:38:05.634112 kubelet[2470]: I0509 00:38:05.634048 2470 container_manager_linux.go:300] "Creating device plugin manager" May 9 00:38:05.634112 kubelet[2470]: I0509 00:38:05.634086 2470 state_mem.go:36] "Initialized new in-memory state store" May 9 00:38:05.634223 kubelet[2470]: I0509 00:38:05.634205 2470 kubelet.go:408] "Attempting to sync node with API server" May 9 00:38:05.634223 kubelet[2470]: I0509 00:38:05.634220 2470 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:38:05.634291 kubelet[2470]: I0509 00:38:05.634255 2470 kubelet.go:314] "Adding apiserver pod source" May 9 00:38:05.634291 kubelet[2470]: I0509 00:38:05.634271 2470 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:38:05.635644 kubelet[2470]: I0509 00:38:05.635235 2470 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:38:05.635929 kubelet[2470]: I0509 00:38:05.635903 2470 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:38:05.636632 kubelet[2470]: I0509 00:38:05.636608 2470 server.go:1269] "Started kubelet" May 9 00:38:05.638361 kubelet[2470]: I0509 00:38:05.638270 2470 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:38:05.640253 kubelet[2470]: I0509 00:38:05.639721 2470 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:38:05.641184 kubelet[2470]: I0509 00:38:05.641167 2470 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:38:05.641898 kubelet[2470]: I0509 00:38:05.641879 2470 server.go:460] "Adding debug handlers to kubelet server" May 9 00:38:05.644076 kubelet[2470]: 
I0509 00:38:05.644028 2470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:38:05.645008 kubelet[2470]: E0509 00:38:05.644979 2470 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:38:05.646062 kubelet[2470]: I0509 00:38:05.646040 2470 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:38:05.647592 kubelet[2470]: I0509 00:38:05.647480 2470 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 00:38:05.647592 kubelet[2470]: I0509 00:38:05.647561 2470 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 00:38:05.647714 kubelet[2470]: I0509 00:38:05.647672 2470 reconciler.go:26] "Reconciler: start to sync state" May 9 00:38:05.649023 kubelet[2470]: I0509 00:38:05.649000 2470 factory.go:221] Registration of the systemd container factory successfully May 9 00:38:05.649126 kubelet[2470]: I0509 00:38:05.649102 2470 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:38:05.650522 kubelet[2470]: I0509 00:38:05.650492 2470 factory.go:221] Registration of the containerd container factory successfully May 9 00:38:05.656045 kubelet[2470]: I0509 00:38:05.656000 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:38:05.657182 kubelet[2470]: I0509 00:38:05.657161 2470 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:38:05.657218 kubelet[2470]: I0509 00:38:05.657196 2470 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:38:05.657218 kubelet[2470]: I0509 00:38:05.657215 2470 kubelet.go:2321] "Starting kubelet main sync loop" May 9 00:38:05.657278 kubelet[2470]: E0509 00:38:05.657253 2470 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:38:05.686238 kubelet[2470]: I0509 00:38:05.686212 2470 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:38:05.686238 kubelet[2470]: I0509 00:38:05.686229 2470 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:38:05.686238 kubelet[2470]: I0509 00:38:05.686247 2470 state_mem.go:36] "Initialized new in-memory state store" May 9 00:38:05.686429 kubelet[2470]: I0509 00:38:05.686389 2470 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:38:05.686429 kubelet[2470]: I0509 00:38:05.686399 2470 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:38:05.686429 kubelet[2470]: I0509 00:38:05.686416 2470 policy_none.go:49] "None policy: Start" May 9 00:38:05.686931 kubelet[2470]: I0509 00:38:05.686888 2470 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:38:05.686931 kubelet[2470]: I0509 00:38:05.686908 2470 state_mem.go:35] "Initializing new in-memory state store" May 9 00:38:05.687074 kubelet[2470]: I0509 00:38:05.687062 2470 state_mem.go:75] "Updated machine memory state" May 9 00:38:05.691723 kubelet[2470]: I0509 00:38:05.691584 2470 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:38:05.691844 kubelet[2470]: I0509 00:38:05.691818 2470 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:38:05.691873 kubelet[2470]: I0509 00:38:05.691837 2470 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" May 9 00:38:05.692136 kubelet[2470]: I0509 00:38:05.692049 2470 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:38:05.799571 kubelet[2470]: I0509 00:38:05.799528 2470 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:38:05.805211 kubelet[2470]: I0509 00:38:05.805172 2470 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 9 00:38:05.805356 kubelet[2470]: I0509 00:38:05.805254 2470 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 00:38:05.948424 kubelet[2470]: I0509 00:38:05.948293 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:38:05.948424 kubelet[2470]: I0509 00:38:05.948348 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:38:05.948424 kubelet[2470]: I0509 00:38:05.948372 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:38:05.948675 kubelet[2470]: I0509 00:38:05.948488 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/f14b3b3c26f30f4b9b7336cf6959c992-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f14b3b3c26f30f4b9b7336cf6959c992\") " pod="kube-system/kube-apiserver-localhost" May 9 00:38:05.948675 kubelet[2470]: I0509 00:38:05.948540 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f14b3b3c26f30f4b9b7336cf6959c992-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f14b3b3c26f30f4b9b7336cf6959c992\") " pod="kube-system/kube-apiserver-localhost" May 9 00:38:05.948675 kubelet[2470]: I0509 00:38:05.948562 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f14b3b3c26f30f4b9b7336cf6959c992-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f14b3b3c26f30f4b9b7336cf6959c992\") " pod="kube-system/kube-apiserver-localhost" May 9 00:38:05.948675 kubelet[2470]: I0509 00:38:05.948577 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:38:05.948675 kubelet[2470]: I0509 00:38:05.948599 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:38:05.948796 kubelet[2470]: I0509 00:38:05.948618 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 00:38:06.065891 kubelet[2470]: E0509 00:38:06.065851 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:06.065891 kubelet[2470]: E0509 00:38:06.065904 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:06.066919 kubelet[2470]: E0509 00:38:06.066891 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:06.076786 sudo[2505]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 00:38:06.077240 sudo[2505]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 00:38:06.635867 kubelet[2470]: I0509 00:38:06.635825 2470 apiserver.go:52] "Watching apiserver" May 9 00:38:06.647863 kubelet[2470]: I0509 00:38:06.647803 2470 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 00:38:06.673359 kubelet[2470]: E0509 00:38:06.673284 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:06.676390 kubelet[2470]: E0509 00:38:06.675352 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:06.679246 kubelet[2470]: E0509 00:38:06.679196 2470 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:38:06.679636 kubelet[2470]: E0509 00:38:06.679613 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:06.696178 kubelet[2470]: I0509 00:38:06.696107 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.696080601 podStartE2EDuration="1.696080601s" podCreationTimestamp="2025-05-09 00:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:38:06.695379533 +0000 UTC m=+1.143241669" watchObservedRunningTime="2025-05-09 00:38:06.696080601 +0000 UTC m=+1.143942747" May 9 00:38:06.699441 sudo[2505]: pam_unix(sudo:session): session closed for user root May 9 00:38:06.701660 kubelet[2470]: I0509 00:38:06.701351 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.701319485 podStartE2EDuration="1.701319485s" podCreationTimestamp="2025-05-09 00:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:38:06.701176912 +0000 UTC m=+1.149039048" watchObservedRunningTime="2025-05-09 00:38:06.701319485 +0000 UTC m=+1.149181621" May 9 00:38:06.714516 kubelet[2470]: I0509 00:38:06.714407 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.714381515 podStartE2EDuration="1.714381515s" podCreationTimestamp="2025-05-09 00:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:38:06.707869592 +0000 UTC 
m=+1.155731728" watchObservedRunningTime="2025-05-09 00:38:06.714381515 +0000 UTC m=+1.162243651" May 9 00:38:07.674924 kubelet[2470]: E0509 00:38:07.674876 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:08.213066 sudo[1635]: pam_unix(sudo:session): session closed for user root May 9 00:38:08.215565 sshd[1632]: pam_unix(sshd:session): session closed for user core May 9 00:38:08.220047 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:34402.service: Deactivated successfully. May 9 00:38:08.222229 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:38:08.222441 systemd[1]: session-7.scope: Consumed 5.506s CPU time, 158.7M memory peak, 0B memory swap peak. May 9 00:38:08.222867 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. May 9 00:38:08.223895 systemd-logind[1442]: Removed session 7. May 9 00:38:10.521327 kubelet[2470]: I0509 00:38:10.521281 2470 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:38:10.521768 containerd[1455]: time="2025-05-09T00:38:10.521676711Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:38:10.522090 kubelet[2470]: I0509 00:38:10.521858 2470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:38:11.912811 systemd[1]: Created slice kubepods-besteffort-pod5ef2e583_e5f9_4519_8092_fa0c10cf6731.slice - libcontainer container kubepods-besteffort-pod5ef2e583_e5f9_4519_8092_fa0c10cf6731.slice. May 9 00:38:11.925052 systemd[1]: Created slice kubepods-burstable-pod54deb056_bbd1_4f23_ade0_34d13e893f9f.slice - libcontainer container kubepods-burstable-pod54deb056_bbd1_4f23_ade0_34d13e893f9f.slice. 
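The `kubepods-besteffort-pod5ef2e583_e5f9_4519_8092_fa0c10cf6731.slice` names in the entries above follow a fixed scheme for burstable and best-effort pods: QoS class plus the pod UID with its dashes escaped to underscores, since `-` acts as the hierarchy separator in systemd slice unit names. A minimal sketch of that naming (function name is ours; guaranteed-QoS pods, which sit directly under kubepods.slice, are not covered):

```python
def pod_uid_to_slice(uid: str, qos: str = "besteffort") -> str:
    """Build the kubepods cgroup slice name for a pod UID on the systemd
    cgroup driver: dashes in the UID become underscores because '-'
    separates slice hierarchy levels in systemd unit names."""
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"
```

Applied to the kube-proxy pod UID above this reproduces the slice name systemd reports creating.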
May 9 00:38:11.934112 systemd[1]: Created slice kubepods-besteffort-pod30e08718_53c0_4365_98d7_b74ca9fe035a.slice - libcontainer container kubepods-besteffort-pod30e08718_53c0_4365_98d7_b74ca9fe035a.slice. May 9 00:38:12.007056 kubelet[2470]: I0509 00:38:12.006995 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5ef2e583-e5f9-4519-8092-fa0c10cf6731-kube-proxy\") pod \"kube-proxy-5p7z6\" (UID: \"5ef2e583-e5f9-4519-8092-fa0c10cf6731\") " pod="kube-system/kube-proxy-5p7z6" May 9 00:38:12.007056 kubelet[2470]: I0509 00:38:12.007041 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-host-proc-sys-net\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.007056 kubelet[2470]: I0509 00:38:12.007065 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54deb056-bbd1-4f23-ade0-34d13e893f9f-hubble-tls\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.007768 kubelet[2470]: I0509 00:38:12.007087 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-lib-modules\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.007768 kubelet[2470]: I0509 00:38:12.007123 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-xtables-lock\") pod \"cilium-tj2ph\" (UID: 
\"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.007768 kubelet[2470]: I0509 00:38:12.007180 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-config-path\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.007768 kubelet[2470]: I0509 00:38:12.007226 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54deb056-bbd1-4f23-ade0-34d13e893f9f-clustermesh-secrets\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.007768 kubelet[2470]: I0509 00:38:12.007266 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzrw2\" (UniqueName: \"kubernetes.io/projected/54deb056-bbd1-4f23-ade0-34d13e893f9f-kube-api-access-nzrw2\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.008026 kubelet[2470]: I0509 00:38:12.007315 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ef2e583-e5f9-4519-8092-fa0c10cf6731-lib-modules\") pod \"kube-proxy-5p7z6\" (UID: \"5ef2e583-e5f9-4519-8092-fa0c10cf6731\") " pod="kube-system/kube-proxy-5p7z6" May 9 00:38:12.008026 kubelet[2470]: I0509 00:38:12.007373 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6hk4\" (UniqueName: \"kubernetes.io/projected/5ef2e583-e5f9-4519-8092-fa0c10cf6731-kube-api-access-v6hk4\") pod \"kube-proxy-5p7z6\" (UID: \"5ef2e583-e5f9-4519-8092-fa0c10cf6731\") " pod="kube-system/kube-proxy-5p7z6" 
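Each `VerifyControllerAttachedVolume` entry above carries a UniqueName of the form `<plugin>/<podUID>-<volumeName>`, which is how the reconciler keys a volume instance to a single pod. A sketch of that composition (helper name is ours; the real reconciler builds these strings internally):

```python
def volume_unique_name(plugin: str, pod_uid: str, volume: str) -> str:
    """Compose a reconciler-style UniqueName for a pod-scoped volume:
    plugin path, then '<podUID>-<volumeName>'."""
    return f"{plugin}/{pod_uid}-{volume}"
```

For instance, the kube-proxy ConfigMap volume logged above is keyed under `kubernetes.io/configmap/<uid>-kube-proxy`.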
May 9 00:38:12.008026 kubelet[2470]: I0509 00:38:12.007418 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-bpf-maps\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.008026 kubelet[2470]: I0509 00:38:12.007437 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-cgroup\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.008026 kubelet[2470]: I0509 00:38:12.007450 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-etc-cni-netd\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.008026 kubelet[2470]: I0509 00:38:12.007468 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ef2e583-e5f9-4519-8092-fa0c10cf6731-xtables-lock\") pod \"kube-proxy-5p7z6\" (UID: \"5ef2e583-e5f9-4519-8092-fa0c10cf6731\") " pod="kube-system/kube-proxy-5p7z6" May 9 00:38:12.008313 kubelet[2470]: I0509 00:38:12.007485 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-run\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.008313 kubelet[2470]: I0509 00:38:12.007506 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-hostproc\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.008313 kubelet[2470]: I0509 00:38:12.007528 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cni-path\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.008313 kubelet[2470]: I0509 00:38:12.007549 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-host-proc-sys-kernel\") pod \"cilium-tj2ph\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") " pod="kube-system/cilium-tj2ph" May 9 00:38:12.109589 kubelet[2470]: I0509 00:38:12.109491 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mshh\" (UniqueName: \"kubernetes.io/projected/30e08718-53c0-4365-98d7-b74ca9fe035a-kube-api-access-8mshh\") pod \"cilium-operator-5d85765b45-f6vkh\" (UID: \"30e08718-53c0-4365-98d7-b74ca9fe035a\") " pod="kube-system/cilium-operator-5d85765b45-f6vkh" May 9 00:38:12.109909 kubelet[2470]: I0509 00:38:12.109618 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30e08718-53c0-4365-98d7-b74ca9fe035a-cilium-config-path\") pod \"cilium-operator-5d85765b45-f6vkh\" (UID: \"30e08718-53c0-4365-98d7-b74ca9fe035a\") " pod="kube-system/cilium-operator-5d85765b45-f6vkh" May 9 00:38:12.224985 kubelet[2470]: E0509 00:38:12.224870 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:12.225583 containerd[1455]: time="2025-05-09T00:38:12.225529505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5p7z6,Uid:5ef2e583-e5f9-4519-8092-fa0c10cf6731,Namespace:kube-system,Attempt:0,}" May 9 00:38:12.229157 kubelet[2470]: E0509 00:38:12.229126 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:12.229568 containerd[1455]: time="2025-05-09T00:38:12.229511171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tj2ph,Uid:54deb056-bbd1-4f23-ade0-34d13e893f9f,Namespace:kube-system,Attempt:0,}" May 9 00:38:12.236623 kubelet[2470]: E0509 00:38:12.236572 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:12.237068 containerd[1455]: time="2025-05-09T00:38:12.237030357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-f6vkh,Uid:30e08718-53c0-4365-98d7-b74ca9fe035a,Namespace:kube-system,Attempt:0,}" May 9 00:38:12.260513 containerd[1455]: time="2025-05-09T00:38:12.260353533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:38:12.260513 containerd[1455]: time="2025-05-09T00:38:12.260462622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:38:12.260983 containerd[1455]: time="2025-05-09T00:38:12.260742776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:12.261043 containerd[1455]: time="2025-05-09T00:38:12.260965060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:12.269924 containerd[1455]: time="2025-05-09T00:38:12.269565509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:38:12.269924 containerd[1455]: time="2025-05-09T00:38:12.269688463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:38:12.269924 containerd[1455]: time="2025-05-09T00:38:12.269708592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:12.269924 containerd[1455]: time="2025-05-09T00:38:12.269830423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:12.282943 containerd[1455]: time="2025-05-09T00:38:12.282786962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:38:12.282943 containerd[1455]: time="2025-05-09T00:38:12.282854743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:38:12.283674 containerd[1455]: time="2025-05-09T00:38:12.283607649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:12.284813 containerd[1455]: time="2025-05-09T00:38:12.283713100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:12.297742 systemd[1]: Started cri-containerd-57391af2575d97631739d0a34cf5e1af44f83a5b31ffc37dbce559c8cabfd7ab.scope - libcontainer container 57391af2575d97631739d0a34cf5e1af44f83a5b31ffc37dbce559c8cabfd7ab. 
May 9 00:38:12.303371 systemd[1]: Started cri-containerd-af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0.scope - libcontainer container af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0. May 9 00:38:12.305807 systemd[1]: Started cri-containerd-bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e.scope - libcontainer container bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e. May 9 00:38:12.339592 containerd[1455]: time="2025-05-09T00:38:12.339495554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tj2ph,Uid:54deb056-bbd1-4f23-ade0-34d13e893f9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\"" May 9 00:38:12.340401 kubelet[2470]: E0509 00:38:12.340367 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:12.341638 containerd[1455]: time="2025-05-09T00:38:12.341378397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5p7z6,Uid:5ef2e583-e5f9-4519-8092-fa0c10cf6731,Namespace:kube-system,Attempt:0,} returns sandbox id \"57391af2575d97631739d0a34cf5e1af44f83a5b31ffc37dbce559c8cabfd7ab\"" May 9 00:38:12.345700 kubelet[2470]: E0509 00:38:12.345663 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:12.348273 containerd[1455]: time="2025-05-09T00:38:12.348215551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 00:38:12.352391 containerd[1455]: time="2025-05-09T00:38:12.352348987Z" level=info msg="CreateContainer within sandbox \"57391af2575d97631739d0a34cf5e1af44f83a5b31ffc37dbce559c8cabfd7ab\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:38:12.362875 containerd[1455]: time="2025-05-09T00:38:12.362813972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-f6vkh,Uid:30e08718-53c0-4365-98d7-b74ca9fe035a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e\"" May 9 00:38:12.363755 kubelet[2470]: E0509 00:38:12.363722 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:12.376468 containerd[1455]: time="2025-05-09T00:38:12.376351289Z" level=info msg="CreateContainer within sandbox \"57391af2575d97631739d0a34cf5e1af44f83a5b31ffc37dbce559c8cabfd7ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99c93142be9ce562d3eb19bff33f3207845ec53c90d1678035d3707af209761d\"" May 9 00:38:12.377128 containerd[1455]: time="2025-05-09T00:38:12.377090149Z" level=info msg="StartContainer for \"99c93142be9ce562d3eb19bff33f3207845ec53c90d1678035d3707af209761d\"" May 9 00:38:12.405897 systemd[1]: Started cri-containerd-99c93142be9ce562d3eb19bff33f3207845ec53c90d1678035d3707af209761d.scope - libcontainer container 99c93142be9ce562d3eb19bff33f3207845ec53c90d1678035d3707af209761d. 
May 9 00:38:12.443393 containerd[1455]: time="2025-05-09T00:38:12.443351645Z" level=info msg="StartContainer for \"99c93142be9ce562d3eb19bff33f3207845ec53c90d1678035d3707af209761d\" returns successfully" May 9 00:38:12.913969 kubelet[2470]: E0509 00:38:12.913933 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:12.923134 kubelet[2470]: I0509 00:38:12.922654 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5p7z6" podStartSLOduration=1.922634252 podStartE2EDuration="1.922634252s" podCreationTimestamp="2025-05-09 00:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:38:12.92220347 +0000 UTC m=+7.370065606" watchObservedRunningTime="2025-05-09 00:38:12.922634252 +0000 UTC m=+7.370496388" May 9 00:38:14.643948 kubelet[2470]: E0509 00:38:14.643742 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:14.917643 kubelet[2470]: E0509 00:38:14.917126 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:15.096123 kubelet[2470]: E0509 00:38:15.096047 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:15.624587 kubelet[2470]: E0509 00:38:15.624523 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:15.919213 kubelet[2470]: E0509 00:38:15.919092 2470 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:15.919213 kubelet[2470]: E0509 00:38:15.919096 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:17.960233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171580433.mount: Deactivated successfully. May 9 00:38:18.338768 update_engine[1443]: I20250509 00:38:18.338675 1443 update_attempter.cc:509] Updating boot flags... May 9 00:38:18.476365 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2855) May 9 00:38:18.517364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2849) May 9 00:38:25.878261 containerd[1455]: time="2025-05-09T00:38:25.878189113Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:38:25.878910 containerd[1455]: time="2025-05-09T00:38:25.878838180Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 9 00:38:25.879975 containerd[1455]: time="2025-05-09T00:38:25.879941275Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:38:25.881631 containerd[1455]: time="2025-05-09T00:38:25.881599188Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag 
\"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.533339542s" May 9 00:38:25.881681 containerd[1455]: time="2025-05-09T00:38:25.881631779Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 9 00:38:25.890892 containerd[1455]: time="2025-05-09T00:38:25.890862320Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 00:38:25.907566 containerd[1455]: time="2025-05-09T00:38:25.907524085Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:38:26.059885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270076792.mount: Deactivated successfully. May 9 00:38:26.063867 containerd[1455]: time="2025-05-09T00:38:26.063821639Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\"" May 9 00:38:26.066771 containerd[1455]: time="2025-05-09T00:38:26.066731044Z" level=info msg="StartContainer for \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\"" May 9 00:38:26.103487 systemd[1]: Started cri-containerd-93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344.scope - libcontainer container 93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344. 
May 9 00:38:26.129160 containerd[1455]: time="2025-05-09T00:38:26.129033127Z" level=info msg="StartContainer for \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\" returns successfully" May 9 00:38:26.139700 systemd[1]: cri-containerd-93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344.scope: Deactivated successfully. May 9 00:38:26.552639 containerd[1455]: time="2025-05-09T00:38:26.552557139Z" level=info msg="shim disconnected" id=93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344 namespace=k8s.io May 9 00:38:26.552896 containerd[1455]: time="2025-05-09T00:38:26.552641849Z" level=warning msg="cleaning up after shim disconnected" id=93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344 namespace=k8s.io May 9 00:38:26.552896 containerd[1455]: time="2025-05-09T00:38:26.552657108Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:26.942344 kubelet[2470]: E0509 00:38:26.942191 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:26.944916 containerd[1455]: time="2025-05-09T00:38:26.944865324Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:38:26.969782 containerd[1455]: time="2025-05-09T00:38:26.969732963Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\"" May 9 00:38:26.970539 containerd[1455]: time="2025-05-09T00:38:26.970475144Z" level=info msg="StartContainer for \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\"" May 9 00:38:26.998512 systemd[1]: Started 
cri-containerd-e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9.scope - libcontainer container e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9. May 9 00:38:27.024137 containerd[1455]: time="2025-05-09T00:38:27.024087554Z" level=info msg="StartContainer for \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\" returns successfully" May 9 00:38:27.036738 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:38:27.037001 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:38:27.037163 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 00:38:27.043708 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:38:27.044114 systemd[1]: cri-containerd-e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9.scope: Deactivated successfully. May 9 00:38:27.058870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344-rootfs.mount: Deactivated successfully. May 9 00:38:27.062416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9-rootfs.mount: Deactivated successfully. May 9 00:38:27.067939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 9 00:38:27.068163 containerd[1455]: time="2025-05-09T00:38:27.067977124Z" level=info msg="shim disconnected" id=e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9 namespace=k8s.io May 9 00:38:27.068163 containerd[1455]: time="2025-05-09T00:38:27.068070831Z" level=warning msg="cleaning up after shim disconnected" id=e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9 namespace=k8s.io May 9 00:38:27.068163 containerd[1455]: time="2025-05-09T00:38:27.068080159Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:27.946981 kubelet[2470]: E0509 00:38:27.946743 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:27.951517 containerd[1455]: time="2025-05-09T00:38:27.951475557Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:38:27.972216 containerd[1455]: time="2025-05-09T00:38:27.972161821Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\"" May 9 00:38:27.972702 containerd[1455]: time="2025-05-09T00:38:27.972653148Z" level=info msg="StartContainer for \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\"" May 9 00:38:27.999533 systemd[1]: Started cri-containerd-1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b.scope - libcontainer container 1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b. 
May 9 00:38:28.027832 containerd[1455]: time="2025-05-09T00:38:28.027791915Z" level=info msg="StartContainer for \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\" returns successfully" May 9 00:38:28.029823 systemd[1]: cri-containerd-1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b.scope: Deactivated successfully. May 9 00:38:28.054222 containerd[1455]: time="2025-05-09T00:38:28.054155737Z" level=info msg="shim disconnected" id=1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b namespace=k8s.io May 9 00:38:28.054222 containerd[1455]: time="2025-05-09T00:38:28.054221061Z" level=warning msg="cleaning up after shim disconnected" id=1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b namespace=k8s.io May 9 00:38:28.054486 containerd[1455]: time="2025-05-09T00:38:28.054231851Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:28.057568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b-rootfs.mount: Deactivated successfully. May 9 00:38:28.353634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403086325.mount: Deactivated successfully. 
May 9 00:38:28.948086 kubelet[2470]: E0509 00:38:28.948052 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:28.950586 containerd[1455]: time="2025-05-09T00:38:28.950549020Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:38:29.096753 containerd[1455]: time="2025-05-09T00:38:29.096699993Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\"" May 9 00:38:29.097406 containerd[1455]: time="2025-05-09T00:38:29.097361601Z" level=info msg="StartContainer for \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\"" May 9 00:38:29.101189 containerd[1455]: time="2025-05-09T00:38:29.101153063Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:38:29.102152 containerd[1455]: time="2025-05-09T00:38:29.102109628Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 9 00:38:29.103358 containerd[1455]: time="2025-05-09T00:38:29.103310774Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:38:29.105992 containerd[1455]: time="2025-05-09T00:38:29.105938480Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.215040552s" May 9 00:38:29.105992 containerd[1455]: time="2025-05-09T00:38:29.105981642Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 9 00:38:29.109838 containerd[1455]: time="2025-05-09T00:38:29.109398768Z" level=info msg="CreateContainer within sandbox \"bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 00:38:29.132474 systemd[1]: Started cri-containerd-124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc.scope - libcontainer container 124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc. May 9 00:38:29.159448 systemd[1]: cri-containerd-124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc.scope: Deactivated successfully. 
May 9 00:38:29.311696 containerd[1455]: time="2025-05-09T00:38:29.311648941Z" level=info msg="StartContainer for \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\" returns successfully" May 9 00:38:29.325282 containerd[1455]: time="2025-05-09T00:38:29.325170859Z" level=info msg="CreateContainer within sandbox \"bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\"" May 9 00:38:29.329952 containerd[1455]: time="2025-05-09T00:38:29.328683456Z" level=info msg="StartContainer for \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\"" May 9 00:38:29.337761 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:33122.service - OpenSSH per-connection server daemon (10.0.0.1:33122). May 9 00:38:29.376779 containerd[1455]: time="2025-05-09T00:38:29.376712119Z" level=info msg="shim disconnected" id=124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc namespace=k8s.io May 9 00:38:29.376779 containerd[1455]: time="2025-05-09T00:38:29.376769487Z" level=warning msg="cleaning up after shim disconnected" id=124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc namespace=k8s.io May 9 00:38:29.376779 containerd[1455]: time="2025-05-09T00:38:29.376779286Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:29.388479 systemd[1]: Started cri-containerd-415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23.scope - libcontainer container 415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23. May 9 00:38:29.389434 sshd[3119]: Accepted publickey for core from 10.0.0.1 port 33122 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w May 9 00:38:29.391404 sshd[3119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:38:29.396787 systemd-logind[1442]: New session 8 of user core. 
May 9 00:38:29.404479 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 00:38:29.421830 containerd[1455]: time="2025-05-09T00:38:29.421789875Z" level=info msg="StartContainer for \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\" returns successfully" May 9 00:38:29.590694 sshd[3119]: pam_unix(sshd:session): session closed for user core May 9 00:38:29.594860 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. May 9 00:38:29.597801 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:33122.service: Deactivated successfully. May 9 00:38:29.601009 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:38:29.603094 systemd-logind[1442]: Removed session 8. May 9 00:38:29.953898 kubelet[2470]: E0509 00:38:29.953743 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:29.959358 kubelet[2470]: E0509 00:38:29.958738 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:29.963307 containerd[1455]: time="2025-05-09T00:38:29.962813965Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:38:29.990576 containerd[1455]: time="2025-05-09T00:38:29.990522806Z" level=info msg="CreateContainer within sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\"" May 9 00:38:29.991248 containerd[1455]: time="2025-05-09T00:38:29.991218909Z" level=info msg="StartContainer for \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\"" May 9 00:38:30.047534 
systemd[1]: Started cri-containerd-6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230.scope - libcontainer container 6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230. May 9 00:38:30.089164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc-rootfs.mount: Deactivated successfully. May 9 00:38:30.092136 containerd[1455]: time="2025-05-09T00:38:30.091867034Z" level=info msg="StartContainer for \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\" returns successfully" May 9 00:38:30.230274 kubelet[2470]: I0509 00:38:30.229839 2470 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 9 00:38:30.249298 kubelet[2470]: I0509 00:38:30.249236 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-f6vkh" podStartSLOduration=2.50580914 podStartE2EDuration="19.249217892s" podCreationTimestamp="2025-05-09 00:38:11 +0000 UTC" firstStartedPulling="2025-05-09 00:38:12.364309766 +0000 UTC m=+6.812171902" lastFinishedPulling="2025-05-09 00:38:29.107718518 +0000 UTC m=+23.555580654" observedRunningTime="2025-05-09 00:38:29.9880286 +0000 UTC m=+24.435890737" watchObservedRunningTime="2025-05-09 00:38:30.249217892 +0000 UTC m=+24.697080028" May 9 00:38:30.261182 systemd[1]: Created slice kubepods-burstable-pod4fbbb2bd_995c_42a6_a764_c5c3c60d1bb3.slice - libcontainer container kubepods-burstable-pod4fbbb2bd_995c_42a6_a764_c5c3c60d1bb3.slice. May 9 00:38:30.265758 systemd[1]: Created slice kubepods-burstable-pod68282516_1caa_4341_8402_7c977334614d.slice - libcontainer container kubepods-burstable-pod68282516_1caa_4341_8402_7c977334614d.slice. 
May 9 00:38:30.428395 kubelet[2470]: I0509 00:38:30.428304 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68282516-1caa-4341-8402-7c977334614d-config-volume\") pod \"coredns-6f6b679f8f-wd7pb\" (UID: \"68282516-1caa-4341-8402-7c977334614d\") " pod="kube-system/coredns-6f6b679f8f-wd7pb"
May 9 00:38:30.428395 kubelet[2470]: I0509 00:38:30.428388 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fbbb2bd-995c-42a6-a764-c5c3c60d1bb3-config-volume\") pod \"coredns-6f6b679f8f-5vvb4\" (UID: \"4fbbb2bd-995c-42a6-a764-c5c3c60d1bb3\") " pod="kube-system/coredns-6f6b679f8f-5vvb4"
May 9 00:38:30.428602 kubelet[2470]: I0509 00:38:30.428428 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wdcg\" (UniqueName: \"kubernetes.io/projected/68282516-1caa-4341-8402-7c977334614d-kube-api-access-9wdcg\") pod \"coredns-6f6b679f8f-wd7pb\" (UID: \"68282516-1caa-4341-8402-7c977334614d\") " pod="kube-system/coredns-6f6b679f8f-wd7pb"
May 9 00:38:30.428602 kubelet[2470]: I0509 00:38:30.428456 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jx8\" (UniqueName: \"kubernetes.io/projected/4fbbb2bd-995c-42a6-a764-c5c3c60d1bb3-kube-api-access-m4jx8\") pod \"coredns-6f6b679f8f-5vvb4\" (UID: \"4fbbb2bd-995c-42a6-a764-c5c3c60d1bb3\") " pod="kube-system/coredns-6f6b679f8f-5vvb4"
May 9 00:38:30.564002 kubelet[2470]: E0509 00:38:30.563945 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:30.569458 kubelet[2470]: E0509 00:38:30.569408 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:30.571378 containerd[1455]: time="2025-05-09T00:38:30.571346562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wd7pb,Uid:68282516-1caa-4341-8402-7c977334614d,Namespace:kube-system,Attempt:0,}"
May 9 00:38:30.578978 containerd[1455]: time="2025-05-09T00:38:30.578930133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5vvb4,Uid:4fbbb2bd-995c-42a6-a764-c5c3c60d1bb3,Namespace:kube-system,Attempt:0,}"
May 9 00:38:30.963432 kubelet[2470]: E0509 00:38:30.963243 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:30.963432 kubelet[2470]: E0509 00:38:30.963283 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:30.976210 kubelet[2470]: I0509 00:38:30.976163 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tj2ph" podStartSLOduration=6.429843664 podStartE2EDuration="19.97613504s" podCreationTimestamp="2025-05-09 00:38:11 +0000 UTC" firstStartedPulling="2025-05-09 00:38:12.344404109 +0000 UTC m=+6.792266245" lastFinishedPulling="2025-05-09 00:38:25.890695485 +0000 UTC m=+20.338557621" observedRunningTime="2025-05-09 00:38:30.975495023 +0000 UTC m=+25.423357169" watchObservedRunningTime="2025-05-09 00:38:30.97613504 +0000 UTC m=+25.423997176"
May 9 00:38:31.965248 kubelet[2470]: E0509 00:38:31.965207 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:32.967151 kubelet[2470]: E0509 00:38:32.967102 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:33.110232 systemd-networkd[1389]: cilium_host: Link UP
May 9 00:38:33.111120 systemd-networkd[1389]: cilium_net: Link UP
May 9 00:38:33.111355 systemd-networkd[1389]: cilium_net: Gained carrier
May 9 00:38:33.111553 systemd-networkd[1389]: cilium_host: Gained carrier
May 9 00:38:33.159427 systemd-networkd[1389]: cilium_net: Gained IPv6LL
May 9 00:38:33.217600 systemd-networkd[1389]: cilium_vxlan: Link UP
May 9 00:38:33.217811 systemd-networkd[1389]: cilium_vxlan: Gained carrier
May 9 00:38:33.339526 systemd-networkd[1389]: cilium_host: Gained IPv6LL
May 9 00:38:33.472366 kernel: NET: Registered PF_ALG protocol family
May 9 00:38:33.968974 kubelet[2470]: E0509 00:38:33.968934 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:34.144426 systemd-networkd[1389]: lxc_health: Link UP
May 9 00:38:34.145978 systemd-networkd[1389]: lxc_health: Gained carrier
May 9 00:38:34.599540 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:33138.service - OpenSSH per-connection server daemon (10.0.0.1:33138).
May 9 00:38:34.642382 sshd[3688]: Accepted publickey for core from 10.0.0.1 port 33138 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:38:34.644502 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:34.648527 systemd-logind[1442]: New session 9 of user core.
May 9 00:38:34.659478 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 00:38:34.684737 systemd-networkd[1389]: lxc05d9c704d8e6: Link UP
May 9 00:38:34.696359 kernel: eth0: renamed from tmpf47fb
May 9 00:38:34.702586 systemd-networkd[1389]: lxc8c37ca232fd4: Link UP
May 9 00:38:34.716942 systemd-networkd[1389]: lxc05d9c704d8e6: Gained carrier
May 9 00:38:34.720670 kernel: eth0: renamed from tmp2d4a6
May 9 00:38:34.731783 systemd-networkd[1389]: lxc8c37ca232fd4: Gained carrier
May 9 00:38:34.804862 sshd[3688]: pam_unix(sshd:session): session closed for user core
May 9 00:38:34.807626 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit.
May 9 00:38:34.807973 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:33138.service: Deactivated successfully.
May 9 00:38:34.810059 systemd[1]: session-9.scope: Deactivated successfully.
May 9 00:38:34.812636 systemd-logind[1442]: Removed session 9.
May 9 00:38:34.813199 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL
May 9 00:38:34.971067 kubelet[2470]: E0509 00:38:34.970963 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:35.579505 systemd-networkd[1389]: lxc_health: Gained IPv6LL
May 9 00:38:35.972998 kubelet[2470]: E0509 00:38:35.972846 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:36.475495 systemd-networkd[1389]: lxc05d9c704d8e6: Gained IPv6LL
May 9 00:38:36.795505 systemd-networkd[1389]: lxc8c37ca232fd4: Gained IPv6LL
May 9 00:38:38.106265 containerd[1455]: time="2025-05-09T00:38:38.105988855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:38:38.106265 containerd[1455]: time="2025-05-09T00:38:38.106037857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:38:38.106265 containerd[1455]: time="2025-05-09T00:38:38.106048176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:38.106265 containerd[1455]: time="2025-05-09T00:38:38.106122166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:38.112955 containerd[1455]: time="2025-05-09T00:38:38.112483667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:38:38.112955 containerd[1455]: time="2025-05-09T00:38:38.112535754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:38:38.112955 containerd[1455]: time="2025-05-09T00:38:38.112559560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:38.112955 containerd[1455]: time="2025-05-09T00:38:38.112650510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:38:38.134460 systemd[1]: Started cri-containerd-2d4a675c5e3071d8b573a0e18c6c20932b427b46a22804b34666596c7f661949.scope - libcontainer container 2d4a675c5e3071d8b573a0e18c6c20932b427b46a22804b34666596c7f661949.
May 9 00:38:38.138898 systemd[1]: Started cri-containerd-f47fbeb2f13ec7697e0bd6985d954d96c252b1f5268b172cb560285a8f7d309a.scope - libcontainer container f47fbeb2f13ec7697e0bd6985d954d96c252b1f5268b172cb560285a8f7d309a.
May 9 00:38:38.147742 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:38:38.155448 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:38:38.176307 containerd[1455]: time="2025-05-09T00:38:38.176262379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5vvb4,Uid:4fbbb2bd-995c-42a6-a764-c5c3c60d1bb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d4a675c5e3071d8b573a0e18c6c20932b427b46a22804b34666596c7f661949\""
May 9 00:38:38.177219 kubelet[2470]: E0509 00:38:38.177171 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:38.180579 containerd[1455]: time="2025-05-09T00:38:38.180503611Z" level=info msg="CreateContainer within sandbox \"2d4a675c5e3071d8b573a0e18c6c20932b427b46a22804b34666596c7f661949\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:38:38.183006 containerd[1455]: time="2025-05-09T00:38:38.182944544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wd7pb,Uid:68282516-1caa-4341-8402-7c977334614d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f47fbeb2f13ec7697e0bd6985d954d96c252b1f5268b172cb560285a8f7d309a\""
May 9 00:38:38.184025 kubelet[2470]: E0509 00:38:38.183780 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:38.185643 containerd[1455]: time="2025-05-09T00:38:38.185614037Z" level=info msg="CreateContainer within sandbox \"f47fbeb2f13ec7697e0bd6985d954d96c252b1f5268b172cb560285a8f7d309a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:38:38.202112 containerd[1455]: time="2025-05-09T00:38:38.202064102Z" level=info msg="CreateContainer within sandbox \"2d4a675c5e3071d8b573a0e18c6c20932b427b46a22804b34666596c7f661949\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5d12f62f84790d438c7892c6821ab08a566f0565a6bc4282811d1d2d1a90799\""
May 9 00:38:38.202641 containerd[1455]: time="2025-05-09T00:38:38.202591294Z" level=info msg="StartContainer for \"c5d12f62f84790d438c7892c6821ab08a566f0565a6bc4282811d1d2d1a90799\""
May 9 00:38:38.205695 containerd[1455]: time="2025-05-09T00:38:38.205651051Z" level=info msg="CreateContainer within sandbox \"f47fbeb2f13ec7697e0bd6985d954d96c252b1f5268b172cb560285a8f7d309a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb1a131bdea94e2b6cf35aaf02ffe044a5ac31496279e756d88fc0a70f33fa35\""
May 9 00:38:38.206385 containerd[1455]: time="2025-05-09T00:38:38.206163425Z" level=info msg="StartContainer for \"cb1a131bdea94e2b6cf35aaf02ffe044a5ac31496279e756d88fc0a70f33fa35\""
May 9 00:38:38.231474 systemd[1]: Started cri-containerd-c5d12f62f84790d438c7892c6821ab08a566f0565a6bc4282811d1d2d1a90799.scope - libcontainer container c5d12f62f84790d438c7892c6821ab08a566f0565a6bc4282811d1d2d1a90799.
May 9 00:38:38.234778 systemd[1]: Started cri-containerd-cb1a131bdea94e2b6cf35aaf02ffe044a5ac31496279e756d88fc0a70f33fa35.scope - libcontainer container cb1a131bdea94e2b6cf35aaf02ffe044a5ac31496279e756d88fc0a70f33fa35.
May 9 00:38:38.261092 containerd[1455]: time="2025-05-09T00:38:38.261042822Z" level=info msg="StartContainer for \"c5d12f62f84790d438c7892c6821ab08a566f0565a6bc4282811d1d2d1a90799\" returns successfully"
May 9 00:38:38.264286 containerd[1455]: time="2025-05-09T00:38:38.264230410Z" level=info msg="StartContainer for \"cb1a131bdea94e2b6cf35aaf02ffe044a5ac31496279e756d88fc0a70f33fa35\" returns successfully"
May 9 00:38:38.980419 kubelet[2470]: E0509 00:38:38.980121 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:38.981878 kubelet[2470]: E0509 00:38:38.981841 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:39.123299 kubelet[2470]: I0509 00:38:39.123227 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5vvb4" podStartSLOduration=28.123203707 podStartE2EDuration="28.123203707s" podCreationTimestamp="2025-05-09 00:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:38:39.052912636 +0000 UTC m=+33.500774772" watchObservedRunningTime="2025-05-09 00:38:39.123203707 +0000 UTC m=+33.571065853"
May 9 00:38:39.818286 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:38610.service - OpenSSH per-connection server daemon (10.0.0.1:38610).
May 9 00:38:39.860007 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 38610 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:38:39.861776 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:39.865802 systemd-logind[1442]: New session 10 of user core.
May 9 00:38:39.876446 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 00:38:39.983578 kubelet[2470]: E0509 00:38:39.983499 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:39.983578 kubelet[2470]: E0509 00:38:39.983576 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:40.025552 sshd[3905]: pam_unix(sshd:session): session closed for user core
May 9 00:38:40.029118 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:38610.service: Deactivated successfully.
May 9 00:38:40.030851 systemd[1]: session-10.scope: Deactivated successfully.
May 9 00:38:40.031539 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit.
May 9 00:38:40.032375 systemd-logind[1442]: Removed session 10.
May 9 00:38:40.985348 kubelet[2470]: E0509 00:38:40.985306 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:40.985348 kubelet[2470]: E0509 00:38:40.985328 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:45.037669 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:60954.service - OpenSSH per-connection server daemon (10.0.0.1:60954).
May 9 00:38:45.077174 sshd[3923]: Accepted publickey for core from 10.0.0.1 port 60954 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:38:45.078870 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:45.083188 systemd-logind[1442]: New session 11 of user core.
May 9 00:38:45.096593 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 00:38:45.210349 sshd[3923]: pam_unix(sshd:session): session closed for user core
May 9 00:38:45.228695 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:60954.service: Deactivated successfully.
May 9 00:38:45.231207 systemd[1]: session-11.scope: Deactivated successfully.
May 9 00:38:45.233132 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit.
May 9 00:38:45.241609 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:60964.service - OpenSSH per-connection server daemon (10.0.0.1:60964).
May 9 00:38:45.242511 systemd-logind[1442]: Removed session 11.
May 9 00:38:45.278244 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 60964 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:38:45.280035 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:45.284547 systemd-logind[1442]: New session 12 of user core.
May 9 00:38:45.294499 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 00:38:45.533353 sshd[3938]: pam_unix(sshd:session): session closed for user core
May 9 00:38:45.542079 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:60964.service: Deactivated successfully.
May 9 00:38:45.543734 systemd[1]: session-12.scope: Deactivated successfully.
May 9 00:38:45.545531 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit.
May 9 00:38:45.554590 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:60980.service - OpenSSH per-connection server daemon (10.0.0.1:60980).
May 9 00:38:45.555519 systemd-logind[1442]: Removed session 12.
May 9 00:38:45.593150 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 60980 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:38:45.594685 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:45.598551 systemd-logind[1442]: New session 13 of user core.
May 9 00:38:45.607484 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 00:38:45.728782 sshd[3951]: pam_unix(sshd:session): session closed for user core
May 9 00:38:45.732954 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:60980.service: Deactivated successfully.
May 9 00:38:45.734976 systemd[1]: session-13.scope: Deactivated successfully.
May 9 00:38:45.735712 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit.
May 9 00:38:45.736653 systemd-logind[1442]: Removed session 13.
May 9 00:38:50.743464 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:60994.service - OpenSSH per-connection server daemon (10.0.0.1:60994).
May 9 00:38:50.784868 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 60994 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:38:50.786656 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:50.790710 systemd-logind[1442]: New session 14 of user core.
May 9 00:38:50.800452 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 00:38:50.915842 sshd[3968]: pam_unix(sshd:session): session closed for user core
May 9 00:38:50.920061 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:60994.service: Deactivated successfully.
May 9 00:38:50.922456 systemd[1]: session-14.scope: Deactivated successfully.
May 9 00:38:50.923148 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit.
May 9 00:38:50.924049 systemd-logind[1442]: Removed session 14.
May 9 00:38:55.931080 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:51578.service - OpenSSH per-connection server daemon (10.0.0.1:51578).
May 9 00:38:55.977789 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 51578 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:38:55.979222 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:55.983020 systemd-logind[1442]: New session 15 of user core.
May 9 00:38:55.996485 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 00:38:56.100058 sshd[3983]: pam_unix(sshd:session): session closed for user core
May 9 00:38:56.103842 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:51578.service: Deactivated successfully.
May 9 00:38:56.105806 systemd[1]: session-15.scope: Deactivated successfully.
May 9 00:38:56.106503 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit.
May 9 00:38:56.107479 systemd-logind[1442]: Removed session 15.
May 9 00:39:01.112374 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:51592.service - OpenSSH per-connection server daemon (10.0.0.1:51592).
May 9 00:39:01.151288 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 51592 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:01.152833 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:01.156427 systemd-logind[1442]: New session 16 of user core.
May 9 00:39:01.166427 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 00:39:01.271462 sshd[3998]: pam_unix(sshd:session): session closed for user core
May 9 00:39:01.282302 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:51592.service: Deactivated successfully.
May 9 00:39:01.284238 systemd[1]: session-16.scope: Deactivated successfully.
May 9 00:39:01.285874 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit.
May 9 00:39:01.290564 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:51604.service - OpenSSH per-connection server daemon (10.0.0.1:51604).
May 9 00:39:01.291434 systemd-logind[1442]: Removed session 16.
May 9 00:39:01.326087 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 51604 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:01.327609 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:01.331447 systemd-logind[1442]: New session 17 of user core.
May 9 00:39:01.339459 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 00:39:01.657626 sshd[4012]: pam_unix(sshd:session): session closed for user core
May 9 00:39:01.665425 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:51604.service: Deactivated successfully.
May 9 00:39:01.667386 systemd[1]: session-17.scope: Deactivated successfully.
May 9 00:39:01.669045 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit.
May 9 00:39:01.684628 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:51616.service - OpenSSH per-connection server daemon (10.0.0.1:51616).
May 9 00:39:01.685668 systemd-logind[1442]: Removed session 17.
May 9 00:39:01.722525 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 51616 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:01.724175 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:01.728262 systemd-logind[1442]: New session 18 of user core.
May 9 00:39:01.739451 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 00:39:03.102623 sshd[4024]: pam_unix(sshd:session): session closed for user core
May 9 00:39:03.111475 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:51616.service: Deactivated successfully.
May 9 00:39:03.113363 systemd[1]: session-18.scope: Deactivated successfully.
May 9 00:39:03.115453 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit.
May 9 00:39:03.123692 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:51626.service - OpenSSH per-connection server daemon (10.0.0.1:51626).
May 9 00:39:03.125853 systemd-logind[1442]: Removed session 18.
May 9 00:39:03.157553 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 51626 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:03.159150 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:03.163285 systemd-logind[1442]: New session 19 of user core.
May 9 00:39:03.179510 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 00:39:03.396981 sshd[4043]: pam_unix(sshd:session): session closed for user core
May 9 00:39:03.406042 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:51626.service: Deactivated successfully.
May 9 00:39:03.408417 systemd[1]: session-19.scope: Deactivated successfully.
May 9 00:39:03.409164 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit.
May 9 00:39:03.419810 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:51636.service - OpenSSH per-connection server daemon (10.0.0.1:51636).
May 9 00:39:03.421265 systemd-logind[1442]: Removed session 19.
May 9 00:39:03.454268 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 51636 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:03.455740 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:03.459770 systemd-logind[1442]: New session 20 of user core.
May 9 00:39:03.469457 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 00:39:03.574351 sshd[4056]: pam_unix(sshd:session): session closed for user core
May 9 00:39:03.578822 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:51636.service: Deactivated successfully.
May 9 00:39:03.580951 systemd[1]: session-20.scope: Deactivated successfully.
May 9 00:39:03.581574 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
May 9 00:39:03.582391 systemd-logind[1442]: Removed session 20.
May 9 00:39:08.586272 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:54652.service - OpenSSH per-connection server daemon (10.0.0.1:54652).
May 9 00:39:08.624499 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 54652 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:08.625962 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:08.629583 systemd-logind[1442]: New session 21 of user core.
May 9 00:39:08.641451 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 00:39:08.748224 sshd[4075]: pam_unix(sshd:session): session closed for user core
May 9 00:39:08.752594 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:54652.service: Deactivated successfully.
May 9 00:39:08.754794 systemd[1]: session-21.scope: Deactivated successfully.
May 9 00:39:08.755455 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
May 9 00:39:08.756297 systemd-logind[1442]: Removed session 21.
May 9 00:39:13.759416 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:54668.service - OpenSSH per-connection server daemon (10.0.0.1:54668).
May 9 00:39:13.798453 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 54668 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:13.800037 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:13.803576 systemd-logind[1442]: New session 22 of user core.
May 9 00:39:13.813446 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 00:39:13.915613 sshd[4091]: pam_unix(sshd:session): session closed for user core
May 9 00:39:13.919859 systemd[1]: sshd@21-10.0.0.142:22-10.0.0.1:54668.service: Deactivated successfully.
May 9 00:39:13.921933 systemd[1]: session-22.scope: Deactivated successfully.
May 9 00:39:13.922507 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
May 9 00:39:13.923410 systemd-logind[1442]: Removed session 22.
May 9 00:39:15.658401 kubelet[2470]: E0509 00:39:15.658360 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:18.927374 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:45672.service - OpenSSH per-connection server daemon (10.0.0.1:45672).
May 9 00:39:18.965690 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 45672 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:18.967203 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:18.970839 systemd-logind[1442]: New session 23 of user core.
May 9 00:39:18.985470 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 00:39:19.090968 sshd[4106]: pam_unix(sshd:session): session closed for user core
May 9 00:39:19.094984 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:45672.service: Deactivated successfully.
May 9 00:39:19.097037 systemd[1]: session-23.scope: Deactivated successfully.
May 9 00:39:19.097770 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
May 9 00:39:19.098625 systemd-logind[1442]: Removed session 23.
May 9 00:39:19.658209 kubelet[2470]: E0509 00:39:19.658172 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:24.106419 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:45676.service - OpenSSH per-connection server daemon (10.0.0.1:45676).
May 9 00:39:24.144803 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 45676 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:24.146266 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:24.149868 systemd-logind[1442]: New session 24 of user core.
May 9 00:39:24.155552 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 00:39:24.256528 sshd[4120]: pam_unix(sshd:session): session closed for user core
May 9 00:39:24.264247 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:45676.service: Deactivated successfully.
May 9 00:39:24.266202 systemd[1]: session-24.scope: Deactivated successfully.
May 9 00:39:24.267533 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit.
May 9 00:39:24.279565 systemd[1]: Started sshd@24-10.0.0.142:22-10.0.0.1:45690.service - OpenSSH per-connection server daemon (10.0.0.1:45690).
May 9 00:39:24.280618 systemd-logind[1442]: Removed session 24.
May 9 00:39:24.313585 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 45690 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:24.315155 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:24.319048 systemd-logind[1442]: New session 25 of user core.
May 9 00:39:24.328452 systemd[1]: Started session-25.scope - Session 25 of User core.
May 9 00:39:25.653170 kubelet[2470]: I0509 00:39:25.652095 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wd7pb" podStartSLOduration=74.652076998 podStartE2EDuration="1m14.652076998s" podCreationTimestamp="2025-05-09 00:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:38:39.406350613 +0000 UTC m=+33.854212749" watchObservedRunningTime="2025-05-09 00:39:25.652076998 +0000 UTC m=+80.099939134"
May 9 00:39:25.665474 containerd[1455]: time="2025-05-09T00:39:25.665303007Z" level=info msg="StopContainer for \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\" with timeout 30 (s)"
May 9 00:39:25.669431 containerd[1455]: time="2025-05-09T00:39:25.668423530Z" level=info msg="Stop container \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\" with signal terminated"
May 9 00:39:25.679982 systemd[1]: cri-containerd-415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23.scope: Deactivated successfully.
May 9 00:39:25.695021 containerd[1455]: time="2025-05-09T00:39:25.694978222Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 00:39:25.700525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23-rootfs.mount: Deactivated successfully.
May 9 00:39:25.702394 containerd[1455]: time="2025-05-09T00:39:25.702310961Z" level=info msg="StopContainer for \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\" with timeout 2 (s)"
May 9 00:39:25.702566 containerd[1455]: time="2025-05-09T00:39:25.702550661Z" level=info msg="Stop container \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\" with signal terminated"
May 9 00:39:25.708792 systemd-networkd[1389]: lxc_health: Link DOWN
May 9 00:39:25.708803 systemd-networkd[1389]: lxc_health: Lost carrier
May 9 00:39:25.711021 containerd[1455]: time="2025-05-09T00:39:25.710785399Z" level=info msg="shim disconnected" id=415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23 namespace=k8s.io
May 9 00:39:25.711021 containerd[1455]: time="2025-05-09T00:39:25.710838941Z" level=warning msg="cleaning up after shim disconnected" id=415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23 namespace=k8s.io
May 9 00:39:25.711021 containerd[1455]: time="2025-05-09T00:39:25.710847719Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:39:25.719040 kubelet[2470]: E0509 00:39:25.719000 2470 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 9 00:39:25.727702 containerd[1455]: time="2025-05-09T00:39:25.727659993Z" level=info msg="StopContainer for \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\" returns successfully"
May 9 00:39:25.731405 containerd[1455]: time="2025-05-09T00:39:25.731372341Z" level=info msg="StopPodSandbox for \"bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e\""
May 9 00:39:25.731464 containerd[1455]: time="2025-05-09T00:39:25.731408850Z" level=info msg="Container to stop \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:39:25.733755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e-shm.mount: Deactivated successfully.
May 9 00:39:25.737838 systemd[1]: cri-containerd-bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e.scope: Deactivated successfully.
May 9 00:39:25.739844 systemd[1]: cri-containerd-6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230.scope: Deactivated successfully.
May 9 00:39:25.740108 systemd[1]: cri-containerd-6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230.scope: Consumed 6.866s CPU time.
May 9 00:39:25.758868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230-rootfs.mount: Deactivated successfully.
May 9 00:39:25.763770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e-rootfs.mount: Deactivated successfully.
May 9 00:39:25.766567 containerd[1455]: time="2025-05-09T00:39:25.766489189Z" level=info msg="shim disconnected" id=bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e namespace=k8s.io
May 9 00:39:25.766567 containerd[1455]: time="2025-05-09T00:39:25.766561448Z" level=warning msg="cleaning up after shim disconnected" id=bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e namespace=k8s.io
May 9 00:39:25.766567 containerd[1455]: time="2025-05-09T00:39:25.766570405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:39:25.766707 containerd[1455]: time="2025-05-09T00:39:25.766489179Z" level=info msg="shim disconnected" id=6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230 namespace=k8s.io
May 9 00:39:25.766707 containerd[1455]: time="2025-05-09T00:39:25.766682590Z" level=warning msg="cleaning up after shim disconnected" id=6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230 namespace=k8s.io
May 9 00:39:25.766707 containerd[1455]: time="2025-05-09T00:39:25.766691858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:39:25.788313 containerd[1455]: time="2025-05-09T00:39:25.788262314Z" level=info msg="StopContainer for \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\" returns successfully"
May 9 00:39:25.788771 containerd[1455]: time="2025-05-09T00:39:25.788750912Z" level=info msg="StopPodSandbox for \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\""
May 9 00:39:25.788827 containerd[1455]: time="2025-05-09T00:39:25.788779336Z" level=info msg="Container to stop \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:39:25.788827 containerd[1455]: time="2025-05-09T00:39:25.788791679Z" level=info msg="Container to stop \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:39:25.788827 containerd[1455]: time="2025-05-09T00:39:25.788801980Z" level=info msg="Container to stop \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:39:25.788827 containerd[1455]: time="2025-05-09T00:39:25.788810666Z" level=info msg="Container to stop \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:39:25.788827 containerd[1455]: time="2025-05-09T00:39:25.788818962Z" level=info msg="Container to stop \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 00:39:25.792742 containerd[1455]: time="2025-05-09T00:39:25.792701526Z" level=info msg="TearDown network for sandbox \"bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e\" successfully"
May 9 00:39:25.792742 containerd[1455]: time="2025-05-09T00:39:25.792740220Z" level=info msg="StopPodSandbox for \"bdf28e2a678fcd1b649f58afa7187d9db2b22cd895250f324d49dffdd25ec41e\" returns successfully"
May 9 00:39:25.794917 systemd[1]: cri-containerd-af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0.scope: Deactivated successfully.
May 9 00:39:25.817186 containerd[1455]: time="2025-05-09T00:39:25.816924448Z" level=info msg="shim disconnected" id=af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0 namespace=k8s.io
May 9 00:39:25.817186 containerd[1455]: time="2025-05-09T00:39:25.816987590Z" level=warning msg="cleaning up after shim disconnected" id=af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0 namespace=k8s.io
May 9 00:39:25.817186 containerd[1455]: time="2025-05-09T00:39:25.816996256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:39:25.831301 containerd[1455]: time="2025-05-09T00:39:25.831087062Z" level=info msg="TearDown network for sandbox \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" successfully"
May 9 00:39:25.831301 containerd[1455]: time="2025-05-09T00:39:25.831123682Z" level=info msg="StopPodSandbox for \"af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0\" returns successfully"
May 9 00:39:25.942551 kubelet[2470]: I0509 00:39:25.942448 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-host-proc-sys-kernel\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942551 kubelet[2470]: I0509 00:39:25.942480 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cni-path\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942551 kubelet[2470]: I0509 00:39:25.942506 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mshh\" (UniqueName: \"kubernetes.io/projected/30e08718-53c0-4365-98d7-b74ca9fe035a-kube-api-access-8mshh\") pod \"30e08718-53c0-4365-98d7-b74ca9fe035a\" (UID: \"30e08718-53c0-4365-98d7-b74ca9fe035a\") "
May 9 00:39:25.942551 kubelet[2470]: I0509 00:39:25.942523 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54deb056-bbd1-4f23-ade0-34d13e893f9f-hubble-tls\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942551 kubelet[2470]: I0509 00:39:25.942536 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-run\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942551 kubelet[2470]: I0509 00:39:25.942552 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30e08718-53c0-4365-98d7-b74ca9fe035a-cilium-config-path\") pod \"30e08718-53c0-4365-98d7-b74ca9fe035a\" (UID: \"30e08718-53c0-4365-98d7-b74ca9fe035a\") "
May 9 00:39:25.942787 kubelet[2470]: I0509 00:39:25.942567 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-host-proc-sys-net\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942787 kubelet[2470]: I0509 00:39:25.942583 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-cgroup\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942787 kubelet[2470]: I0509 00:39:25.942594 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.942787 kubelet[2470]: I0509 00:39:25.942620 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-etc-cni-netd\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942787 kubelet[2470]: I0509 00:39:25.942660 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.942920 kubelet[2470]: I0509 00:39:25.942685 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.942920 kubelet[2470]: I0509 00:39:25.942684 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-config-path\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942920 kubelet[2470]: I0509 00:39:25.942709 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzrw2\" (UniqueName: \"kubernetes.io/projected/54deb056-bbd1-4f23-ade0-34d13e893f9f-kube-api-access-nzrw2\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942920 kubelet[2470]: I0509 00:39:25.942725 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54deb056-bbd1-4f23-ade0-34d13e893f9f-clustermesh-secrets\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942920 kubelet[2470]: I0509 00:39:25.942740 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-hostproc\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.942920 kubelet[2470]: I0509 00:39:25.942754 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-lib-modules\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.943061 kubelet[2470]: I0509 00:39:25.942767 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-bpf-maps\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.943061 kubelet[2470]: I0509 00:39:25.942782 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-xtables-lock\") pod \"54deb056-bbd1-4f23-ade0-34d13e893f9f\" (UID: \"54deb056-bbd1-4f23-ade0-34d13e893f9f\") "
May 9 00:39:25.943061 kubelet[2470]: I0509 00:39:25.942811 2470 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 9 00:39:25.943061 kubelet[2470]: I0509 00:39:25.942820 2470 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-run\") on node \"localhost\" DevicePath \"\""
May 9 00:39:25.943061 kubelet[2470]: I0509 00:39:25.942830 2470 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 9 00:39:25.943061 kubelet[2470]: I0509 00:39:25.942849 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.943204 kubelet[2470]: I0509 00:39:25.942922 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cni-path" (OuterVolumeSpecName: "cni-path") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.946213 kubelet[2470]: I0509 00:39:25.946178 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54deb056-bbd1-4f23-ade0-34d13e893f9f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 9 00:39:25.946263 kubelet[2470]: I0509 00:39:25.946221 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.946263 kubelet[2470]: I0509 00:39:25.946237 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.946887 kubelet[2470]: I0509 00:39:25.946744 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e08718-53c0-4365-98d7-b74ca9fe035a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "30e08718-53c0-4365-98d7-b74ca9fe035a" (UID: "30e08718-53c0-4365-98d7-b74ca9fe035a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 9 00:39:25.946887 kubelet[2470]: I0509 00:39:25.946788 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-hostproc" (OuterVolumeSpecName: "hostproc") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.946887 kubelet[2470]: I0509 00:39:25.946810 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.946887 kubelet[2470]: I0509 00:39:25.946826 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 00:39:25.946887 kubelet[2470]: I0509 00:39:25.946834 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 9 00:39:25.948261 kubelet[2470]: I0509 00:39:25.948211 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54deb056-bbd1-4f23-ade0-34d13e893f9f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 9 00:39:25.948850 kubelet[2470]: I0509 00:39:25.948816 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54deb056-bbd1-4f23-ade0-34d13e893f9f-kube-api-access-nzrw2" (OuterVolumeSpecName: "kube-api-access-nzrw2") pod "54deb056-bbd1-4f23-ade0-34d13e893f9f" (UID: "54deb056-bbd1-4f23-ade0-34d13e893f9f"). InnerVolumeSpecName "kube-api-access-nzrw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 9 00:39:25.949412 kubelet[2470]: I0509 00:39:25.949379 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e08718-53c0-4365-98d7-b74ca9fe035a-kube-api-access-8mshh" (OuterVolumeSpecName: "kube-api-access-8mshh") pod "30e08718-53c0-4365-98d7-b74ca9fe035a" (UID: "30e08718-53c0-4365-98d7-b74ca9fe035a"). InnerVolumeSpecName "kube-api-access-8mshh". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 9 00:39:26.043188 kubelet[2470]: I0509 00:39:26.043149 2470 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043188 kubelet[2470]: I0509 00:39:26.043174 2470 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cni-path\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043188 kubelet[2470]: I0509 00:39:26.043184 2470 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8mshh\" (UniqueName: \"kubernetes.io/projected/30e08718-53c0-4365-98d7-b74ca9fe035a-kube-api-access-8mshh\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043188 kubelet[2470]: I0509 00:39:26.043196 2470 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54deb056-bbd1-4f23-ade0-34d13e893f9f-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043394 kubelet[2470]: I0509 00:39:26.043207 2470 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30e08718-53c0-4365-98d7-b74ca9fe035a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043394 kubelet[2470]: I0509 00:39:26.043218 2470 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043394 kubelet[2470]: I0509 00:39:26.043226 2470 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043394 kubelet[2470]: I0509 00:39:26.043234 2470 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54deb056-bbd1-4f23-ade0-34d13e893f9f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043394 kubelet[2470]: I0509 00:39:26.043242 2470 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54deb056-bbd1-4f23-ade0-34d13e893f9f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043394 kubelet[2470]: I0509 00:39:26.043249 2470 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nzrw2\" (UniqueName: \"kubernetes.io/projected/54deb056-bbd1-4f23-ade0-34d13e893f9f-kube-api-access-nzrw2\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043394 kubelet[2470]: I0509 00:39:26.043257 2470 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-hostproc\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043394 kubelet[2470]: I0509 00:39:26.043265 2470 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-lib-modules\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.043577 kubelet[2470]: I0509 00:39:26.043272 2470 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54deb056-bbd1-4f23-ade0-34d13e893f9f-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 9 00:39:26.077318 kubelet[2470]: I0509 00:39:26.077287 2470 scope.go:117] "RemoveContainer" containerID="415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23"
May 9 00:39:26.078535 containerd[1455]: time="2025-05-09T00:39:26.078485553Z" level=info msg="RemoveContainer for \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\""
May 9 00:39:26.086127 systemd[1]: Removed slice kubepods-besteffort-pod30e08718_53c0_4365_98d7_b74ca9fe035a.slice - libcontainer container kubepods-besteffort-pod30e08718_53c0_4365_98d7_b74ca9fe035a.slice.
May 9 00:39:26.088173 systemd[1]: Removed slice kubepods-burstable-pod54deb056_bbd1_4f23_ade0_34d13e893f9f.slice - libcontainer container kubepods-burstable-pod54deb056_bbd1_4f23_ade0_34d13e893f9f.slice.
May 9 00:39:26.088257 systemd[1]: kubepods-burstable-pod54deb056_bbd1_4f23_ade0_34d13e893f9f.slice: Consumed 6.968s CPU time.
May 9 00:39:26.095660 containerd[1455]: time="2025-05-09T00:39:26.095591491Z" level=info msg="RemoveContainer for \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\" returns successfully"
May 9 00:39:26.096454 kubelet[2470]: I0509 00:39:26.096354 2470 scope.go:117] "RemoveContainer" containerID="415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23"
May 9 00:39:26.102081 containerd[1455]: time="2025-05-09T00:39:26.102008138Z" level=error msg="ContainerStatus for \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\": not found"
May 9 00:39:26.112220 kubelet[2470]: E0509 00:39:26.112180 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\": not found" containerID="415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23"
May 9 00:39:26.112342 kubelet[2470]: I0509 00:39:26.112221 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23"} err="failed to get container status \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\": rpc error: code = NotFound desc = an error occurred when try to find container \"415b5b3b509575d15bb163cfd0f4089136fb04ff93e5800f8864691ee7835a23\": not found"
May 9 00:39:26.112342 kubelet[2470]: I0509 00:39:26.112305 2470 scope.go:117] "RemoveContainer" containerID="6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230"
May 9 00:39:26.113662 containerd[1455]: time="2025-05-09T00:39:26.113621020Z" level=info msg="RemoveContainer for \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\""
May 9 00:39:26.117074 containerd[1455]: time="2025-05-09T00:39:26.117043839Z" level=info msg="RemoveContainer for \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\" returns successfully"
May 9 00:39:26.117278 kubelet[2470]: I0509 00:39:26.117247 2470 scope.go:117] "RemoveContainer" containerID="124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc"
May 9 00:39:26.118498 containerd[1455]: time="2025-05-09T00:39:26.118456646Z" level=info msg="RemoveContainer for \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\""
May 9 00:39:26.121625 containerd[1455]: time="2025-05-09T00:39:26.121594619Z" level=info msg="RemoveContainer for \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\" returns successfully"
May 9 00:39:26.121786 kubelet[2470]: I0509 00:39:26.121746 2470 scope.go:117] "RemoveContainer" containerID="1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b"
May 9 00:39:26.122733 containerd[1455]: time="2025-05-09T00:39:26.122686762Z" level=info msg="RemoveContainer for \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\""
May 9 00:39:26.125869 containerd[1455]: time="2025-05-09T00:39:26.125838982Z" level=info msg="RemoveContainer for \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\" returns successfully"
May 9 00:39:26.126110 kubelet[2470]: I0509 00:39:26.125996 2470 scope.go:117] "RemoveContainer" containerID="e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9"
May 9 00:39:26.127037 containerd[1455]: time="2025-05-09T00:39:26.127007982Z" level=info msg="RemoveContainer for \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\""
May 9 00:39:26.130283 containerd[1455]: time="2025-05-09T00:39:26.130251267Z" level=info msg="RemoveContainer for \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\" returns successfully"
May 9 00:39:26.130461 kubelet[2470]: I0509 00:39:26.130426 2470 scope.go:117] "RemoveContainer" containerID="93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344"
May 9 00:39:26.131496 containerd[1455]: time="2025-05-09T00:39:26.131454312Z" level=info msg="RemoveContainer for \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\""
May 9 00:39:26.142759 containerd[1455]: time="2025-05-09T00:39:26.142723526Z" level=info msg="RemoveContainer for \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\" returns successfully"
May 9 00:39:26.142946 kubelet[2470]: I0509 00:39:26.142883 2470 scope.go:117] "RemoveContainer" containerID="6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230"
May 9 00:39:26.143107 containerd[1455]: time="2025-05-09T00:39:26.143070522Z" level=error msg="ContainerStatus for \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\": not found"
May 9 00:39:26.143289 kubelet[2470]: E0509 00:39:26.143248 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\": not found" containerID="6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230"
May 9 00:39:26.143379 kubelet[2470]: I0509 00:39:26.143300 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230"} err="failed to get container status \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e6c4d4a8561eb0d5a0bdda53a67934bec9d6f61af6af0175e31d36be60e0230\": not found"
May 9 00:39:26.143379 kubelet[2470]: I0509 00:39:26.143345 2470 scope.go:117] "RemoveContainer" containerID="124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc"
May 9 00:39:26.143700 containerd[1455]: time="2025-05-09T00:39:26.143666864Z" level=error msg="ContainerStatus for \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\": not found"
May 9 00:39:26.143829 kubelet[2470]: E0509 00:39:26.143807 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\": not found" containerID="124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc"
May 9 00:39:26.143892 kubelet[2470]: I0509 00:39:26.143828 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc"} err="failed to get container status \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"124969555a8cc85b4d6f9b9cbc22d341389c93a52d73bd9d93704f69cf5be6fc\": not found"
May 9 00:39:26.143892 kubelet[2470]: I0509 00:39:26.143843 2470 scope.go:117] "RemoveContainer" containerID="1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b"
May 9 00:39:26.144075 containerd[1455]: time="2025-05-09T00:39:26.144036984Z" level=error msg="ContainerStatus for \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\": not found"
May 9 00:39:26.144216 kubelet[2470]: E0509 00:39:26.144191 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\": not found" containerID="1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b"
May 9 00:39:26.144260 kubelet[2470]: I0509 00:39:26.144225 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b"} err="failed to get container status \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bf68f09b3309a459098766c44ae1fa473508718b5fada70cbe75d246218767b\": not found"
May 9 00:39:26.144260 kubelet[2470]: I0509 00:39:26.144253 2470 scope.go:117] "RemoveContainer" containerID="e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9"
May 9 00:39:26.144475 containerd[1455]: time="2025-05-09T00:39:26.144444474Z" level=error msg="ContainerStatus for \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\": not found"
May 9 00:39:26.144572 kubelet[2470]: E0509 00:39:26.144547 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\": not found" containerID="e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9"
May 9 00:39:26.144612 kubelet[2470]: I0509 00:39:26.144575 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9"} err="failed to get container status \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e97f4be61587fec766a40f95672dddd8c12f2981da50773b07e82b3eab82d2c9\": not found"
May 9 00:39:26.144612 kubelet[2470]: I0509 00:39:26.144597 2470 scope.go:117] "RemoveContainer" containerID="93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344"
May 9 00:39:26.144824 containerd[1455]: time="2025-05-09T00:39:26.144785678Z" level=error msg="ContainerStatus for \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\": not found"
May 9 00:39:26.144958 kubelet[2470]: E0509 00:39:26.144928 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\": not found" containerID="93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344"
May 9 00:39:26.145021 kubelet[2470]: I0509 00:39:26.144957 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344"} err="failed to get container status \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\": rpc error: code = NotFound desc = an error occurred when try to find container \"93f06858ae20b4ccfc05a0a008f6aa1c44651985705a47d5f331a1c3c4594344\": not found"
May 9
00:39:26.677796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0-rootfs.mount: Deactivated successfully. May 9 00:39:26.677924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af6e65adbb26129095548a834fefbecd181f50d1b8e786edab68da5d12f52eb0-shm.mount: Deactivated successfully. May 9 00:39:26.678010 systemd[1]: var-lib-kubelet-pods-30e08718\x2d53c0\x2d4365\x2d98d7\x2db74ca9fe035a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8mshh.mount: Deactivated successfully. May 9 00:39:26.678087 systemd[1]: var-lib-kubelet-pods-54deb056\x2dbbd1\x2d4f23\x2dade0\x2d34d13e893f9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnzrw2.mount: Deactivated successfully. May 9 00:39:26.678163 systemd[1]: var-lib-kubelet-pods-54deb056\x2dbbd1\x2d4f23\x2dade0\x2d34d13e893f9f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 9 00:39:26.678241 systemd[1]: var-lib-kubelet-pods-54deb056\x2dbbd1\x2d4f23\x2dade0\x2d34d13e893f9f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 00:39:26.958280 kubelet[2470]: I0509 00:39:26.958155 2470 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T00:39:26Z","lastTransitionTime":"2025-05-09T00:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 9 00:39:27.629989 sshd[4135]: pam_unix(sshd:session): session closed for user core May 9 00:39:27.638390 systemd[1]: sshd@24-10.0.0.142:22-10.0.0.1:45690.service: Deactivated successfully. May 9 00:39:27.640300 systemd[1]: session-25.scope: Deactivated successfully. May 9 00:39:27.641959 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. 
May 9 00:39:27.649563 systemd[1]: Started sshd@25-10.0.0.142:22-10.0.0.1:52406.service - OpenSSH per-connection server daemon (10.0.0.1:52406).
May 9 00:39:27.650458 systemd-logind[1442]: Removed session 25.
May 9 00:39:27.663090 kubelet[2470]: I0509 00:39:27.662749 2470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30e08718-53c0-4365-98d7-b74ca9fe035a" path="/var/lib/kubelet/pods/30e08718-53c0-4365-98d7-b74ca9fe035a/volumes"
May 9 00:39:27.663933 kubelet[2470]: I0509 00:39:27.663891 2470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54deb056-bbd1-4f23-ade0-34d13e893f9f" path="/var/lib/kubelet/pods/54deb056-bbd1-4f23-ade0-34d13e893f9f/volumes"
May 9 00:39:27.690651 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 52406 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:27.692239 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:27.696471 systemd-logind[1442]: New session 26 of user core.
May 9 00:39:27.708450 systemd[1]: Started session-26.scope - Session 26 of User core.
May 9 00:39:28.240645 sshd[4296]: pam_unix(sshd:session): session closed for user core
May 9 00:39:28.250218 systemd[1]: sshd@25-10.0.0.142:22-10.0.0.1:52406.service: Deactivated successfully.
May 9 00:39:28.253463 systemd[1]: session-26.scope: Deactivated successfully.
May 9 00:39:28.255308 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit.
May 9 00:39:28.256594 kubelet[2470]: E0509 00:39:28.255899 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="54deb056-bbd1-4f23-ade0-34d13e893f9f" containerName="mount-cgroup"
May 9 00:39:28.256594 kubelet[2470]: E0509 00:39:28.255929 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="54deb056-bbd1-4f23-ade0-34d13e893f9f" containerName="clean-cilium-state"
May 9 00:39:28.256594 kubelet[2470]: E0509 00:39:28.255936 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="54deb056-bbd1-4f23-ade0-34d13e893f9f" containerName="cilium-agent"
May 9 00:39:28.256594 kubelet[2470]: E0509 00:39:28.255943 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="54deb056-bbd1-4f23-ade0-34d13e893f9f" containerName="apply-sysctl-overwrites"
May 9 00:39:28.256594 kubelet[2470]: E0509 00:39:28.255952 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="54deb056-bbd1-4f23-ade0-34d13e893f9f" containerName="mount-bpf-fs"
May 9 00:39:28.256594 kubelet[2470]: E0509 00:39:28.255958 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30e08718-53c0-4365-98d7-b74ca9fe035a" containerName="cilium-operator"
May 9 00:39:28.256594 kubelet[2470]: I0509 00:39:28.255986 2470 memory_manager.go:354] "RemoveStaleState removing state" podUID="54deb056-bbd1-4f23-ade0-34d13e893f9f" containerName="cilium-agent"
May 9 00:39:28.256594 kubelet[2470]: I0509 00:39:28.255993 2470 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e08718-53c0-4365-98d7-b74ca9fe035a" containerName="cilium-operator"
May 9 00:39:28.265624 systemd[1]: Started sshd@26-10.0.0.142:22-10.0.0.1:52416.service - OpenSSH per-connection server daemon (10.0.0.1:52416).
May 9 00:39:28.269543 systemd-logind[1442]: Removed session 26.
May 9 00:39:28.277299 systemd[1]: Created slice kubepods-burstable-pod9c4031d8_3d2f_4f14_ba16_5ff7374ac4d1.slice - libcontainer container kubepods-burstable-pod9c4031d8_3d2f_4f14_ba16_5ff7374ac4d1.slice.
May 9 00:39:28.302078 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 52416 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:28.303796 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:28.307807 systemd-logind[1442]: New session 27 of user core.
May 9 00:39:28.318448 systemd[1]: Started session-27.scope - Session 27 of User core.
May 9 00:39:28.356161 kubelet[2470]: I0509 00:39:28.356127 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-hubble-tls\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356228 kubelet[2470]: I0509 00:39:28.356166 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-lib-modules\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356228 kubelet[2470]: I0509 00:39:28.356182 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-xtables-lock\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356228 kubelet[2470]: I0509 00:39:28.356199 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2wj\" (UniqueName: \"kubernetes.io/projected/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-kube-api-access-2c2wj\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356228 kubelet[2470]: I0509 00:39:28.356219 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-bpf-maps\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356320 kubelet[2470]: I0509 00:39:28.356237 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-etc-cni-netd\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356376 kubelet[2470]: I0509 00:39:28.356327 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-clustermesh-secrets\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356428 kubelet[2470]: I0509 00:39:28.356407 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-host-proc-sys-net\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356451 kubelet[2470]: I0509 00:39:28.356435 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-host-proc-sys-kernel\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356487 kubelet[2470]: I0509 00:39:28.356455 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-cilium-run\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356487 kubelet[2470]: I0509 00:39:28.356476 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-hostproc\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356532 kubelet[2470]: I0509 00:39:28.356491 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-cilium-cgroup\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356532 kubelet[2470]: I0509 00:39:28.356505 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-cni-path\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356532 kubelet[2470]: I0509 00:39:28.356519 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-cilium-ipsec-secrets\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.356611 kubelet[2470]: I0509 00:39:28.356539 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1-cilium-config-path\") pod \"cilium-hszrp\" (UID: \"9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1\") " pod="kube-system/cilium-hszrp"
May 9 00:39:28.369761 sshd[4309]: pam_unix(sshd:session): session closed for user core
May 9 00:39:28.386364 systemd[1]: sshd@26-10.0.0.142:22-10.0.0.1:52416.service: Deactivated successfully.
May 9 00:39:28.388286 systemd[1]: session-27.scope: Deactivated successfully.
May 9 00:39:28.390053 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit.
May 9 00:39:28.394582 systemd[1]: Started sshd@27-10.0.0.142:22-10.0.0.1:52422.service - OpenSSH per-connection server daemon (10.0.0.1:52422).
May 9 00:39:28.395415 systemd-logind[1442]: Removed session 27.
May 9 00:39:28.430080 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 52422 ssh2: RSA SHA256:DDCZN0plE/BEMxYIQ2bJR31edNsyikwHOR0emkziD+w
May 9 00:39:28.431660 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:39:28.435271 systemd-logind[1442]: New session 28 of user core.
May 9 00:39:28.451430 systemd[1]: Started session-28.scope - Session 28 of User core.
May 9 00:39:28.582591 kubelet[2470]: E0509 00:39:28.582549 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:28.583151 containerd[1455]: time="2025-05-09T00:39:28.583108530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hszrp,Uid:9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1,Namespace:kube-system,Attempt:0,}"
May 9 00:39:28.604170 containerd[1455]: time="2025-05-09T00:39:28.604074522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:39:28.604170 containerd[1455]: time="2025-05-09T00:39:28.604141900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:39:28.604170 containerd[1455]: time="2025-05-09T00:39:28.604154495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:39:28.604374 containerd[1455]: time="2025-05-09T00:39:28.604258454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:39:28.622481 systemd[1]: Started cri-containerd-0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8.scope - libcontainer container 0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8.
May 9 00:39:28.642484 containerd[1455]: time="2025-05-09T00:39:28.642442483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hszrp,Uid:9c4031d8-3d2f-4f14-ba16-5ff7374ac4d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\""
May 9 00:39:28.643473 kubelet[2470]: E0509 00:39:28.643447 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:28.645574 containerd[1455]: time="2025-05-09T00:39:28.645547808Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 9 00:39:28.658717 containerd[1455]: time="2025-05-09T00:39:28.658669845Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dfb5a531a0b098813efe1e9ca867498cd20aa3dc5056a0b35c953c8f927ade72\""
May 9 00:39:28.659071 containerd[1455]: time="2025-05-09T00:39:28.659047498Z" level=info msg="StartContainer for \"dfb5a531a0b098813efe1e9ca867498cd20aa3dc5056a0b35c953c8f927ade72\""
May 9 00:39:28.687469 systemd[1]: Started cri-containerd-dfb5a531a0b098813efe1e9ca867498cd20aa3dc5056a0b35c953c8f927ade72.scope - libcontainer container dfb5a531a0b098813efe1e9ca867498cd20aa3dc5056a0b35c953c8f927ade72.
May 9 00:39:28.710866 containerd[1455]: time="2025-05-09T00:39:28.710830798Z" level=info msg="StartContainer for \"dfb5a531a0b098813efe1e9ca867498cd20aa3dc5056a0b35c953c8f927ade72\" returns successfully"
May 9 00:39:28.721489 systemd[1]: cri-containerd-dfb5a531a0b098813efe1e9ca867498cd20aa3dc5056a0b35c953c8f927ade72.scope: Deactivated successfully.
May 9 00:39:28.750947 containerd[1455]: time="2025-05-09T00:39:28.750866794Z" level=info msg="shim disconnected" id=dfb5a531a0b098813efe1e9ca867498cd20aa3dc5056a0b35c953c8f927ade72 namespace=k8s.io
May 9 00:39:28.750947 containerd[1455]: time="2025-05-09T00:39:28.750936316Z" level=warning msg="cleaning up after shim disconnected" id=dfb5a531a0b098813efe1e9ca867498cd20aa3dc5056a0b35c953c8f927ade72 namespace=k8s.io
May 9 00:39:28.750947 containerd[1455]: time="2025-05-09T00:39:28.750945294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:39:29.088978 kubelet[2470]: E0509 00:39:29.088942 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:29.090626 containerd[1455]: time="2025-05-09T00:39:29.090592779Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 9 00:39:29.102325 containerd[1455]: time="2025-05-09T00:39:29.102274137Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a2b4f47c74701dd6d057888f64f52ff8be54a4de5fd68945bb54a7868393af47\""
May 9 00:39:29.103027 containerd[1455]: time="2025-05-09T00:39:29.102798741Z" level=info msg="StartContainer for \"a2b4f47c74701dd6d057888f64f52ff8be54a4de5fd68945bb54a7868393af47\""
May 9 00:39:29.132473 systemd[1]: Started cri-containerd-a2b4f47c74701dd6d057888f64f52ff8be54a4de5fd68945bb54a7868393af47.scope - libcontainer container a2b4f47c74701dd6d057888f64f52ff8be54a4de5fd68945bb54a7868393af47.
May 9 00:39:29.158708 containerd[1455]: time="2025-05-09T00:39:29.158663359Z" level=info msg="StartContainer for \"a2b4f47c74701dd6d057888f64f52ff8be54a4de5fd68945bb54a7868393af47\" returns successfully"
May 9 00:39:29.165276 systemd[1]: cri-containerd-a2b4f47c74701dd6d057888f64f52ff8be54a4de5fd68945bb54a7868393af47.scope: Deactivated successfully.
May 9 00:39:29.188007 containerd[1455]: time="2025-05-09T00:39:29.187937876Z" level=info msg="shim disconnected" id=a2b4f47c74701dd6d057888f64f52ff8be54a4de5fd68945bb54a7868393af47 namespace=k8s.io
May 9 00:39:29.188007 containerd[1455]: time="2025-05-09T00:39:29.187995215Z" level=warning msg="cleaning up after shim disconnected" id=a2b4f47c74701dd6d057888f64f52ff8be54a4de5fd68945bb54a7868393af47 namespace=k8s.io
May 9 00:39:29.188007 containerd[1455]: time="2025-05-09T00:39:29.188005466Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:39:30.095790 kubelet[2470]: E0509 00:39:30.095227 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:30.097194 containerd[1455]: time="2025-05-09T00:39:30.097147811Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 00:39:30.111524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1612059865.mount: Deactivated successfully.
May 9 00:39:30.112507 containerd[1455]: time="2025-05-09T00:39:30.112466604Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7\""
May 9 00:39:30.113049 containerd[1455]: time="2025-05-09T00:39:30.112995184Z" level=info msg="StartContainer for \"0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7\""
May 9 00:39:30.143461 systemd[1]: Started cri-containerd-0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7.scope - libcontainer container 0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7.
May 9 00:39:30.172022 containerd[1455]: time="2025-05-09T00:39:30.171973272Z" level=info msg="StartContainer for \"0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7\" returns successfully"
May 9 00:39:30.174943 systemd[1]: cri-containerd-0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7.scope: Deactivated successfully.
May 9 00:39:30.203772 containerd[1455]: time="2025-05-09T00:39:30.203700671Z" level=info msg="shim disconnected" id=0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7 namespace=k8s.io
May 9 00:39:30.203772 containerd[1455]: time="2025-05-09T00:39:30.203756709Z" level=warning msg="cleaning up after shim disconnected" id=0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7 namespace=k8s.io
May 9 00:39:30.203772 containerd[1455]: time="2025-05-09T00:39:30.203764885Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:39:30.462182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0145e30528aa1b6632c163d1e5d23eb2422cb5b2a79d0b27a620362758b57ce7-rootfs.mount: Deactivated successfully.
May 9 00:39:30.658764 kubelet[2470]: E0509 00:39:30.658703 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:30.720782 kubelet[2470]: E0509 00:39:30.720646 2470 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 9 00:39:31.102994 kubelet[2470]: E0509 00:39:31.102418 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:31.104580 containerd[1455]: time="2025-05-09T00:39:31.104180943Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 00:39:31.118982 containerd[1455]: time="2025-05-09T00:39:31.118587939Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e\""
May 9 00:39:31.119599 containerd[1455]: time="2025-05-09T00:39:31.119567752Z" level=info msg="StartContainer for \"6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e\""
May 9 00:39:31.152524 systemd[1]: Started cri-containerd-6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e.scope - libcontainer container 6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e.
May 9 00:39:31.177257 systemd[1]: cri-containerd-6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e.scope: Deactivated successfully.
May 9 00:39:31.179764 containerd[1455]: time="2025-05-09T00:39:31.179731462Z" level=info msg="StartContainer for \"6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e\" returns successfully"
May 9 00:39:31.204040 containerd[1455]: time="2025-05-09T00:39:31.203974096Z" level=info msg="shim disconnected" id=6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e namespace=k8s.io
May 9 00:39:31.204040 containerd[1455]: time="2025-05-09T00:39:31.204038109Z" level=warning msg="cleaning up after shim disconnected" id=6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e namespace=k8s.io
May 9 00:39:31.204258 containerd[1455]: time="2025-05-09T00:39:31.204047307Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:39:31.462168 systemd[1]: run-containerd-runc-k8s.io-6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e-runc.A2SrFg.mount: Deactivated successfully.
May 9 00:39:31.462283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fde7cc61168d28bacb3715891216c1de1364daeba7d7d2a25ac7fa6b0e65f6e-rootfs.mount: Deactivated successfully.
May 9 00:39:32.106005 kubelet[2470]: E0509 00:39:32.105961 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:39:32.107978 containerd[1455]: time="2025-05-09T00:39:32.107911074Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 00:39:32.122836 containerd[1455]: time="2025-05-09T00:39:32.122746545Z" level=info msg="CreateContainer within sandbox \"0a3809c1ea1286eeaf9e7041921d9225876fa030962e03e802c97413456e8bd8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"513125860cbabc166e03de6dc4deddd58809a31b4d29a07da173c9d0a7f69fcd\""
May 9 00:39:32.124273 containerd[1455]: time="2025-05-09T00:39:32.124230310Z" level=info msg="StartContainer for \"513125860cbabc166e03de6dc4deddd58809a31b4d29a07da173c9d0a7f69fcd\""
May 9 00:39:32.159501 systemd[1]: Started cri-containerd-513125860cbabc166e03de6dc4deddd58809a31b4d29a07da173c9d0a7f69fcd.scope - libcontainer container 513125860cbabc166e03de6dc4deddd58809a31b4d29a07da173c9d0a7f69fcd.
May 9 00:39:32.188689 containerd[1455]: time="2025-05-09T00:39:32.188644326Z" level=info msg="StartContainer for \"513125860cbabc166e03de6dc4deddd58809a31b4d29a07da173c9d0a7f69fcd\" returns successfully" May 9 00:39:32.594366 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 9 00:39:33.112855 kubelet[2470]: E0509 00:39:33.112820 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:33.126645 kubelet[2470]: I0509 00:39:33.126584 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hszrp" podStartSLOduration=5.126564442 podStartE2EDuration="5.126564442s" podCreationTimestamp="2025-05-09 00:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:39:33.126215115 +0000 UTC m=+87.574077251" watchObservedRunningTime="2025-05-09 00:39:33.126564442 +0000 UTC m=+87.574426568" May 9 00:39:34.583971 kubelet[2470]: E0509 00:39:34.583874 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:35.741572 systemd-networkd[1389]: lxc_health: Link UP May 9 00:39:35.742874 systemd-networkd[1389]: lxc_health: Gained carrier May 9 00:39:36.585088 kubelet[2470]: E0509 00:39:36.584694 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:37.120768 kubelet[2470]: E0509 00:39:37.120735 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:37.469415 systemd-networkd[1389]: lxc_health: Gained 
IPv6LL May 9 00:39:38.122173 kubelet[2470]: E0509 00:39:38.122141 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:43.414677 sshd[4317]: pam_unix(sshd:session): session closed for user core May 9 00:39:43.418870 systemd[1]: sshd@27-10.0.0.142:22-10.0.0.1:52422.service: Deactivated successfully. May 9 00:39:43.421018 systemd[1]: session-28.scope: Deactivated successfully. May 9 00:39:43.421801 systemd-logind[1442]: Session 28 logged out. Waiting for processes to exit. May 9 00:39:43.422676 systemd-logind[1442]: Removed session 28. May 9 00:39:43.658351 kubelet[2470]: E0509 00:39:43.658292 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:43.658815 kubelet[2470]: E0509 00:39:43.658544 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"