May 14 00:01:55.965739 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 14 00:01:55.965769 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:01:55.965785 kernel: BIOS-provided physical RAM map:
May 14 00:01:55.965793 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 00:01:55.965802 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 00:01:55.965810 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 00:01:55.965821 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 14 00:01:55.965830 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 14 00:01:55.965839 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 14 00:01:55.965848 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 14 00:01:55.965860 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 00:01:55.965868 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 00:01:55.965877 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 00:01:55.965886 kernel: NX (Execute Disable) protection: active
May 14 00:01:55.965897 kernel: APIC: Static calls initialized
May 14 00:01:55.965910 kernel: SMBIOS 2.8 present.
May 14 00:01:55.965920 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 14 00:01:55.965929 kernel: Hypervisor detected: KVM
May 14 00:01:55.965939 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 00:01:55.965949 kernel: kvm-clock: using sched offset of 2778521611 cycles
May 14 00:01:55.965959 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 00:01:55.965969 kernel: tsc: Detected 2794.748 MHz processor
May 14 00:01:55.965990 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 00:01:55.966001 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 00:01:55.966011 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 14 00:01:55.966024 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 14 00:01:55.966035 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 00:01:55.966044 kernel: Using GB pages for direct mapping
May 14 00:01:55.966054 kernel: ACPI: Early table checksum verification disabled
May 14 00:01:55.966062 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 14 00:01:55.966069 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:01:55.966077 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:01:55.966087 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:01:55.966096 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 14 00:01:55.966107 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:01:55.966116 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:01:55.966126 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:01:55.966136 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:01:55.966144 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 14 00:01:55.966151 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 14 00:01:55.966162 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 14 00:01:55.966172 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 14 00:01:55.966180 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 14 00:01:55.966190 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 14 00:01:55.966197 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 14 00:01:55.966206 kernel: No NUMA configuration found
May 14 00:01:55.966215 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 14 00:01:55.966223 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 14 00:01:55.966236 kernel: Zone ranges:
May 14 00:01:55.966246 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 00:01:55.966256 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 14 00:01:55.966266 kernel: Normal empty
May 14 00:01:55.966275 kernel: Movable zone start for each node
May 14 00:01:55.966284 kernel: Early memory node ranges
May 14 00:01:55.966293 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 14 00:01:55.966302 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 14 00:01:55.966312 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 14 00:01:55.966324 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 00:01:55.966333 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 14 00:01:55.966343 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 14 00:01:55.966352 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 00:01:55.966361 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 00:01:55.966371 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 00:01:55.966380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 00:01:55.966390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 00:01:55.966399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 00:01:55.966411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 00:01:55.966420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 00:01:55.966429 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 00:01:55.966438 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 00:01:55.966447 kernel: TSC deadline timer available
May 14 00:01:55.966457 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 14 00:01:55.966466 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 00:01:55.966475 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 00:01:55.966484 kernel: kvm-guest: setup PV sched yield
May 14 00:01:55.966494 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 14 00:01:55.966505 kernel: Booting paravirtualized kernel on KVM
May 14 00:01:55.966515 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 00:01:55.966524 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 00:01:55.966534 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 14 00:01:55.966543 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 14 00:01:55.966552 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 00:01:55.966561 kernel: kvm-guest: PV spinlocks enabled
May 14 00:01:55.966570 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 00:01:55.966581 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:01:55.966594 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 00:01:55.966603 kernel: random: crng init done
May 14 00:01:55.966612 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 00:01:55.966622 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 00:01:55.966631 kernel: Fallback order for Node 0: 0
May 14 00:01:55.966640 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 14 00:01:55.966649 kernel: Policy zone: DMA32
May 14 00:01:55.966659 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 00:01:55.966671 kernel: Memory: 2430496K/2571752K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 140996K reserved, 0K cma-reserved)
May 14 00:01:55.966710 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 00:01:55.966728 kernel: ftrace: allocating 37993 entries in 149 pages
May 14 00:01:55.966738 kernel: ftrace: allocated 149 pages with 4 groups
May 14 00:01:55.966747 kernel: Dynamic Preempt: voluntary
May 14 00:01:55.966756 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 00:01:55.966766 kernel: rcu: RCU event tracing is enabled.
May 14 00:01:55.966776 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 00:01:55.966785 kernel: Trampoline variant of Tasks RCU enabled.
May 14 00:01:55.966799 kernel: Rude variant of Tasks RCU enabled.
May 14 00:01:55.966809 kernel: Tracing variant of Tasks RCU enabled.
May 14 00:01:55.966819 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 00:01:55.966829 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 00:01:55.966840 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 00:01:55.966850 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 00:01:55.966861 kernel: Console: colour VGA+ 80x25
May 14 00:01:55.966871 kernel: printk: console [ttyS0] enabled
May 14 00:01:55.966881 kernel: ACPI: Core revision 20230628
May 14 00:01:55.966895 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 00:01:55.966905 kernel: APIC: Switch to symmetric I/O mode setup
May 14 00:01:55.966916 kernel: x2apic enabled
May 14 00:01:55.966926 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 00:01:55.966936 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 00:01:55.966946 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 00:01:55.966955 kernel: kvm-guest: setup PV IPIs
May 14 00:01:55.966988 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 00:01:55.966997 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 14 00:01:55.967007 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 14 00:01:55.967018 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 00:01:55.967028 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 00:01:55.967042 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 00:01:55.967052 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 00:01:55.967063 kernel: Spectre V2 : Mitigation: Retpolines
May 14 00:01:55.967074 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 14 00:01:55.967096 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 00:01:55.967111 kernel: RETBleed: Mitigation: untrained return thunk
May 14 00:01:55.967121 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 00:01:55.967137 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 00:01:55.967155 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 00:01:55.967173 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 00:01:55.967183 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 00:01:55.967205 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 00:01:55.967217 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 00:01:55.967230 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 00:01:55.967241 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 00:01:55.967252 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 00:01:55.967263 kernel: Freeing SMP alternatives memory: 32K
May 14 00:01:55.967273 kernel: pid_max: default: 32768 minimum: 301
May 14 00:01:55.967284 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 00:01:55.967294 kernel: landlock: Up and running.
May 14 00:01:55.967304 kernel: SELinux: Initializing.
May 14 00:01:55.967315 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:01:55.967329 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:01:55.967339 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 00:01:55.967349 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:01:55.967359 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:01:55.967370 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:01:55.967380 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 00:01:55.967391 kernel: ... version: 0
May 14 00:01:55.967401 kernel: ... bit width: 48
May 14 00:01:55.967416 kernel: ... generic registers: 6
May 14 00:01:55.967426 kernel: ... value mask: 0000ffffffffffff
May 14 00:01:55.967437 kernel: ... max period: 00007fffffffffff
May 14 00:01:55.967447 kernel: ... fixed-purpose events: 0
May 14 00:01:55.967457 kernel: ... event mask: 000000000000003f
May 14 00:01:55.967466 kernel: signal: max sigframe size: 1776
May 14 00:01:55.967477 kernel: rcu: Hierarchical SRCU implementation.
May 14 00:01:55.967487 kernel: rcu: Max phase no-delay instances is 400.
May 14 00:01:55.967497 kernel: smp: Bringing up secondary CPUs ...
May 14 00:01:55.967507 kernel: smpboot: x86: Booting SMP configuration:
May 14 00:01:55.967521 kernel: .... node #0, CPUs: #1 #2 #3
May 14 00:01:55.967531 kernel: smp: Brought up 1 node, 4 CPUs
May 14 00:01:55.967541 kernel: smpboot: Max logical packages: 1
May 14 00:01:55.967551 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 14 00:01:55.967562 kernel: devtmpfs: initialized
May 14 00:01:55.967572 kernel: x86/mm: Memory block size: 128MB
May 14 00:01:55.967583 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 00:01:55.967593 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 00:01:55.967604 kernel: pinctrl core: initialized pinctrl subsystem
May 14 00:01:55.967618 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 00:01:55.967632 kernel: audit: initializing netlink subsys (disabled)
May 14 00:01:55.967642 kernel: audit: type=2000 audit(1747180915.795:1): state=initialized audit_enabled=0 res=1
May 14 00:01:55.967653 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 00:01:55.967663 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 00:01:55.967673 kernel: cpuidle: using governor menu
May 14 00:01:55.967698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 00:01:55.967709 kernel: dca service started, version 1.12.1
May 14 00:01:55.967719 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 14 00:01:55.967733 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 14 00:01:55.967743 kernel: PCI: Using configuration type 1 for base access
May 14 00:01:55.967753 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 00:01:55.967764 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 00:01:55.967774 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 00:01:55.967785 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 00:01:55.967795 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 00:01:55.967806 kernel: ACPI: Added _OSI(Module Device)
May 14 00:01:55.967816 kernel: ACPI: Added _OSI(Processor Device)
May 14 00:01:55.967831 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 00:01:55.967841 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 00:01:55.967852 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 00:01:55.967862 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 14 00:01:55.967872 kernel: ACPI: Interpreter enabled
May 14 00:01:55.967883 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 00:01:55.967893 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 00:01:55.967904 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 00:01:55.967914 kernel: PCI: Using E820 reservations for host bridge windows
May 14 00:01:55.967929 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 00:01:55.967939 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 00:01:55.968205 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 00:01:55.968378 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 00:01:55.968544 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 00:01:55.968560 kernel: PCI host bridge to bus 0000:00
May 14 00:01:55.968779 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 00:01:55.968940 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 00:01:55.969100 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 00:01:55.969246 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 14 00:01:55.969390 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 14 00:01:55.969532 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 14 00:01:55.969675 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 00:01:55.969861 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 14 00:01:55.970035 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 14 00:01:55.970179 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 14 00:01:55.970317 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 14 00:01:55.970455 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 14 00:01:55.970590 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 00:01:55.970771 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 14 00:01:55.970932 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 14 00:01:55.971111 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 14 00:01:55.971276 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 14 00:01:55.971502 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 14 00:01:55.971811 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 14 00:01:55.972117 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 14 00:01:55.972384 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 14 00:01:55.972591 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 14 00:01:55.972792 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 14 00:01:55.972954 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 14 00:01:55.973124 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 14 00:01:55.973287 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 14 00:01:55.973469 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 14 00:01:55.973634 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 00:01:55.973849 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 14 00:01:55.974025 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 14 00:01:55.974184 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 14 00:01:55.974372 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 14 00:01:55.974570 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 14 00:01:55.974585 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 00:01:55.974596 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 00:01:55.974616 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 00:01:55.974633 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 00:01:55.974643 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 00:01:55.974653 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 00:01:55.974663 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 00:01:55.974672 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 00:01:55.974682 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 00:01:55.974853 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 00:01:55.974862 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 00:01:55.974875 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 00:01:55.974885 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 00:01:55.974895 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 00:01:55.974904 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 00:01:55.974914 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 00:01:55.974924 kernel: iommu: Default domain type: Translated
May 14 00:01:55.974934 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 00:01:55.974943 kernel: PCI: Using ACPI for IRQ routing
May 14 00:01:55.974953 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 00:01:55.974966 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 14 00:01:55.974976 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 14 00:01:55.975133 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 00:01:55.975271 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 00:01:55.975409 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 00:01:55.975421 kernel: vgaarb: loaded
May 14 00:01:55.975431 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 00:01:55.975441 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 00:01:55.975455 kernel: clocksource: Switched to clocksource kvm-clock
May 14 00:01:55.975464 kernel: VFS: Disk quotas dquot_6.6.0
May 14 00:01:55.975475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 00:01:55.975484 kernel: pnp: PnP ACPI init
May 14 00:01:55.975652 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 14 00:01:55.975666 kernel: pnp: PnP ACPI: found 6 devices
May 14 00:01:55.975676 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 00:01:55.975698 kernel: NET: Registered PF_INET protocol family
May 14 00:01:55.975711 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 00:01:55.975719 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 00:01:55.975727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 00:01:55.975734 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 00:01:55.975742 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 00:01:55.975750 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 00:01:55.975758 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:01:55.975765 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:01:55.975773 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 00:01:55.975783 kernel: NET: Registered PF_XDP protocol family
May 14 00:01:55.975902 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 00:01:55.976023 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 00:01:55.976137 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 00:01:55.976280 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 14 00:01:55.976425 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 14 00:01:55.976572 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 14 00:01:55.976588 kernel: PCI: CLS 0 bytes, default 64
May 14 00:01:55.976603 kernel: Initialise system trusted keyrings
May 14 00:01:55.976614 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 00:01:55.976625 kernel: Key type asymmetric registered
May 14 00:01:55.976636 kernel: Asymmetric key parser 'x509' registered
May 14 00:01:55.976646 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 14 00:01:55.976657 kernel: io scheduler mq-deadline registered
May 14 00:01:55.976668 kernel: io scheduler kyber registered
May 14 00:01:55.976678 kernel: io scheduler bfq registered
May 14 00:01:55.976777 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 00:01:55.976795 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 00:01:55.976807 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 00:01:55.976818 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 00:01:55.976830 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 00:01:55.976842 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 00:01:55.976853 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 00:01:55.976865 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 00:01:55.976876 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 00:01:55.977055 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 00:01:55.977212 kernel: rtc_cmos 00:04: registered as rtc0
May 14 00:01:55.977229 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 00:01:55.977378 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T00:01:55 UTC (1747180915)
May 14 00:01:55.977524 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 14 00:01:55.977539 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 00:01:55.977551 kernel: NET: Registered PF_INET6 protocol family
May 14 00:01:55.977562 kernel: Segment Routing with IPv6
May 14 00:01:55.977573 kernel: In-situ OAM (IOAM) with IPv6
May 14 00:01:55.977589 kernel: NET: Registered PF_PACKET protocol family
May 14 00:01:55.977600 kernel: Key type dns_resolver registered
May 14 00:01:55.977611 kernel: IPI shorthand broadcast: enabled
May 14 00:01:55.977623 kernel: sched_clock: Marking stable (673003944, 174864135)->(968134774, -120266695)
May 14 00:01:55.977634 kernel: registered taskstats version 1
May 14 00:01:55.977645 kernel: Loading compiled-in X.509 certificates
May 14 00:01:55.977657 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 14 00:01:55.977668 kernel: Key type .fscrypt registered
May 14 00:01:55.977679 kernel: Key type fscrypt-provisioning registered
May 14 00:01:55.977709 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 00:01:55.977720 kernel: ima: Allocated hash algorithm: sha1
May 14 00:01:55.977732 kernel: ima: No architecture policies found
May 14 00:01:55.977742 kernel: clk: Disabling unused clocks
May 14 00:01:55.977753 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 14 00:01:55.977764 kernel: Write protecting the kernel read-only data: 40960k
May 14 00:01:55.977775 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 14 00:01:55.977787 kernel: Run /init as init process
May 14 00:01:55.977798 kernel: with arguments:
May 14 00:01:55.977812 kernel: /init
May 14 00:01:55.977823 kernel: with environment:
May 14 00:01:55.977835 kernel: HOME=/
May 14 00:01:55.977846 kernel: TERM=linux
May 14 00:01:55.977857 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 00:01:55.977870 systemd[1]: Successfully made /usr/ read-only.
May 14 00:01:55.977885 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:01:55.977899 systemd[1]: Detected virtualization kvm.
May 14 00:01:55.977913 systemd[1]: Detected architecture x86-64.
May 14 00:01:55.977925 systemd[1]: Running in initrd.
May 14 00:01:55.977937 systemd[1]: No hostname configured, using default hostname.
May 14 00:01:55.977950 systemd[1]: Hostname set to <localhost>.
May 14 00:01:55.977962 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:01:55.977974 systemd[1]: Queued start job for default target initrd.target.
May 14 00:01:55.977994 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:01:55.978004 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:01:55.978019 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 00:01:55.978043 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:01:55.978058 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 00:01:55.978071 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 00:01:55.978087 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 00:01:55.978099 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 00:01:55.978112 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:01:55.978124 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 00:01:55.978137 systemd[1]: Reached target paths.target - Path Units.
May 14 00:01:55.978150 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:01:55.978162 systemd[1]: Reached target swap.target - Swaps.
May 14 00:01:55.978175 systemd[1]: Reached target timers.target - Timer Units.
May 14 00:01:55.978187 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:01:55.978202 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:01:55.978213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 00:01:55.978225 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 00:01:55.978236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:01:55.978248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:01:55.978259 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:01:55.978270 systemd[1]: Reached target sockets.target - Socket Units.
May 14 00:01:55.978285 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 00:01:55.978296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:01:55.978311 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 00:01:55.978323 systemd[1]: Starting systemd-fsck-usr.service...
May 14 00:01:55.978336 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:01:55.978349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:01:55.978361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:01:55.978374 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 00:01:55.978387 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:01:55.978403 systemd[1]: Finished systemd-fsck-usr.service.
May 14 00:01:55.978416 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 00:01:55.978459 systemd-journald[194]: Collecting audit messages is disabled.
May 14 00:01:55.978492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 00:01:55.978505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 00:01:55.978517 kernel: Bridge firewalling registered
May 14 00:01:55.978532 systemd-journald[194]: Journal started
May 14 00:01:55.978559 systemd-journald[194]: Runtime Journal (/run/log/journal/798944795fc54ccfa8843c68f459753d) is 6M, max 48.3M, 42.3M free.
May 14 00:01:55.924576 systemd-modules-load[196]: Inserted module 'overlay'
May 14 00:01:55.993656 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:01:55.974642 systemd-modules-load[196]: Inserted module 'br_netfilter'
May 14 00:01:55.993887 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:01:55.996358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:55.999464 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:01:56.002575 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:01:56.008270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:01:56.010192 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:01:56.020216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:01:56.023774 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:01:56.026085 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:01:56.028440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:01:56.030758 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:01:56.039928 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 00:01:56.065545 dracut-cmdline[231]: dracut-dracut-053
May 14 00:01:56.067994 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:01:56.073765 systemd-resolved[230]: Positive Trust Anchors:
May 14 00:01:56.073781 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:01:56.073812 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:01:56.076402 systemd-resolved[230]: Defaulting to hostname 'linux'.
May 14 00:01:56.077549 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:01:56.083670 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:01:56.147714 kernel: SCSI subsystem initialized
May 14 00:01:56.158721 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:01:56.169718 kernel: iscsi: registered transport (tcp)
May 14 00:01:56.190705 kernel: iscsi: registered transport (qla4xxx)
May 14 00:01:56.190740 kernel: QLogic iSCSI HBA Driver
May 14 00:01:56.240458 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 00:01:56.241950 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 00:01:56.281728 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 00:01:56.281816 kernel: device-mapper: uevent: version 1.0.3
May 14 00:01:56.281844 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 00:01:56.328745 kernel: raid6: avx2x4 gen() 21600 MB/s
May 14 00:01:56.345727 kernel: raid6: avx2x2 gen() 21002 MB/s
May 14 00:01:56.367208 kernel: raid6: avx2x1 gen() 17564 MB/s
May 14 00:01:56.367285 kernel: raid6: using algorithm avx2x4 gen() 21600 MB/s
May 14 00:01:56.384874 kernel: raid6: .... xor() 5772 MB/s, rmw enabled
May 14 00:01:56.384948 kernel: raid6: using avx2x2 recovery algorithm
May 14 00:01:56.406712 kernel: xor: automatically using best checksumming function avx
May 14 00:01:56.560740 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 00:01:56.575737 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:01:56.578997 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:01:56.607602 systemd-udevd[416]: Using default interface naming scheme 'v255'.
May 14 00:01:56.613212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:01:56.619524 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 00:01:56.650918 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
May 14 00:01:56.686517 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:01:56.690175 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:01:56.772856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:01:56.778414 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 00:01:56.805304 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 00:01:56.814157 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:01:56.817546 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 14 00:01:56.820462 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 00:01:56.820610 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:01:56.825874 kernel: cryptd: max_cpu_qlen set to 1000
May 14 00:01:56.822405 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:01:56.824818 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 00:01:56.836439 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 00:01:56.836495 kernel: GPT:9289727 != 19775487
May 14 00:01:56.836521 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 00:01:56.836538 kernel: GPT:9289727 != 19775487
May 14 00:01:56.836920 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 00:01:56.844266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:01:56.846103 kernel: AVX2 version of gcm_enc/dec engaged.
May 14 00:01:56.846130 kernel: AES CTR mode by8 optimization enabled
May 14 00:01:56.848825 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:01:56.860711 kernel: libata version 3.00 loaded.
May 14 00:01:56.868747 kernel: ahci 0000:00:1f.2: version 3.0
May 14 00:01:56.869357 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 00:01:56.872261 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 14 00:01:56.872468 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 00:01:56.872249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:01:56.879584 kernel: scsi host0: ahci
May 14 00:01:56.880307 kernel: scsi host1: ahci
May 14 00:01:56.880464 kernel: scsi host2: ahci
May 14 00:01:56.872849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:01:56.873550 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:01:56.884860 kernel: scsi host3: ahci
May 14 00:01:56.873940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:01:56.874115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:56.903757 kernel: scsi host4: ahci
May 14 00:01:56.904025 kernel: scsi host5: ahci
May 14 00:01:56.904242 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 14 00:01:56.904261 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 14 00:01:56.904279 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (463)
May 14 00:01:56.904299 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 14 00:01:56.904325 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 14 00:01:56.904344 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 14 00:01:56.904363 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 14 00:01:56.904382 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (477)
May 14 00:01:56.879639 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:01:56.884945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:01:56.910762 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 00:01:56.935594 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 00:01:56.966585 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 00:01:56.966713 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 00:01:56.972148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:56.985155 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 00:01:56.986314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 00:01:56.990746 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:01:57.018160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:01:57.216144 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 00:01:57.216243 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 00:01:57.216258 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 00:01:57.216272 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 00:01:57.217711 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 14 00:01:57.218709 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 00:01:57.218729 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 00:01:57.220039 kernel: ata3.00: applying bridge limits
May 14 00:01:57.220741 kernel: ata3.00: configured for UDMA/100
May 14 00:01:57.221715 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 00:01:57.346149 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 00:01:57.346631 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 00:01:57.358725 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 14 00:01:57.573309 disk-uuid[563]: Primary Header is updated.
May 14 00:01:57.573309 disk-uuid[563]: Secondary Entries is updated.
May 14 00:01:57.573309 disk-uuid[563]: Secondary Header is updated.
May 14 00:01:57.576815 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:01:57.581702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:01:58.590474 disk-uuid[584]: The operation has completed successfully.
May 14 00:01:58.592256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:01:58.627618 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 00:01:58.627802 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 00:01:58.674444 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 00:01:58.705733 sh[595]: Success
May 14 00:01:58.718744 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 14 00:01:58.758429 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 00:01:58.795614 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 00:01:58.814677 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 00:01:58.886508 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 14 00:01:58.886564 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 00:01:58.886576 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 00:01:58.888909 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 00:01:58.888932 kernel: BTRFS info (device dm-0): using free space tree
May 14 00:01:58.894751 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 00:01:58.896619 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 00:01:58.897750 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 00:01:58.901358 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 00:01:58.946327 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:01:58.946386 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:01:58.946398 kernel: BTRFS info (device vda6): using free space tree
May 14 00:01:58.952706 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:01:58.957724 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:01:59.015951 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 00:01:59.048228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:01:59.097112 systemd-networkd[771]: lo: Link UP
May 14 00:01:59.097123 systemd-networkd[771]: lo: Gained carrier
May 14 00:01:59.108094 systemd-networkd[771]: Enumeration completed
May 14 00:01:59.108488 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:01:59.108493 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:01:59.109372 systemd-networkd[771]: eth0: Link UP
May 14 00:01:59.109376 systemd-networkd[771]: eth0: Gained carrier
May 14 00:01:59.109384 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:01:59.109481 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 00:01:59.171210 systemd[1]: Reached target network.target - Network.
May 14 00:01:59.188735 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 00:01:59.228731 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 00:01:59.238502 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 00:01:59.338044 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.109
May 14 00:01:59.338064 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
May 14 00:01:59.429729 ignition[776]: Ignition 2.20.0
May 14 00:01:59.429746 ignition[776]: Stage: fetch-offline
May 14 00:01:59.429812 ignition[776]: no configs at "/usr/lib/ignition/base.d"
May 14 00:01:59.429826 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:01:59.430005 ignition[776]: parsed url from cmdline: ""
May 14 00:01:59.430010 ignition[776]: no config URL provided
May 14 00:01:59.430017 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
May 14 00:01:59.430029 ignition[776]: no config at "/usr/lib/ignition/user.ign"
May 14 00:01:59.430067 ignition[776]: op(1): [started] loading QEMU firmware config module
May 14 00:01:59.430074 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 00:01:59.438146 ignition[776]: op(1): [finished] loading QEMU firmware config module
May 14 00:01:59.482458 ignition[776]: parsing config with SHA512: 6ad0aa7567541b900ae81353429ab03af3fdb58e2f4b5eee9e57276d791dee3398e11eb4b6a28e57c91866e8644039a6b47a190193d86a25ca635cdd2a569868
May 14 00:01:59.486652 unknown[776]: fetched base config from "system"
May 14 00:01:59.486662 unknown[776]: fetched user config from "qemu"
May 14 00:01:59.489142 ignition[776]: fetch-offline: fetch-offline passed
May 14 00:01:59.489298 ignition[776]: Ignition finished successfully
May 14 00:01:59.491662 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 00:01:59.494273 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 00:01:59.496964 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 00:01:59.526451 ignition[787]: Ignition 2.20.0
May 14 00:01:59.526465 ignition[787]: Stage: kargs
May 14 00:01:59.526666 ignition[787]: no configs at "/usr/lib/ignition/base.d"
May 14 00:01:59.526681 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:01:59.527790 ignition[787]: kargs: kargs passed
May 14 00:01:59.527849 ignition[787]: Ignition finished successfully
May 14 00:01:59.571301 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 00:01:59.574660 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 00:01:59.605894 ignition[796]: Ignition 2.20.0
May 14 00:01:59.605911 ignition[796]: Stage: disks
May 14 00:01:59.606078 ignition[796]: no configs at "/usr/lib/ignition/base.d"
May 14 00:01:59.606090 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:01:59.606900 ignition[796]: disks: disks passed
May 14 00:01:59.606946 ignition[796]: Ignition finished successfully
May 14 00:01:59.612785 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 00:01:59.614979 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 00:01:59.615060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 00:01:59.617171 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 00:01:59.619511 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 00:01:59.621414 systemd[1]: Reached target basic.target - Basic System.
May 14 00:01:59.625145 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 00:01:59.663825 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 00:01:59.870220 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 00:01:59.872663 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 00:01:59.994732 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none.
May 14 00:01:59.995448 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 00:01:59.996121 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 00:02:00.000119 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:02:00.002755 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 00:02:00.003185 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 00:02:00.003249 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 00:02:00.003288 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 00:02:00.019152 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 00:02:00.022711 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815)
May 14 00:02:00.023033 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 00:02:00.026919 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:02:00.026942 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:02:00.026953 kernel: BTRFS info (device vda6): using free space tree
May 14 00:02:00.026963 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:02:00.035802 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:02:00.068060 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
May 14 00:02:00.071974 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
May 14 00:02:00.076097 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
May 14 00:02:00.080971 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 00:02:00.165547 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 00:02:00.167273 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 00:02:00.170934 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 00:02:00.188098 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 00:02:00.189580 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:02:00.200147 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 00:02:00.228918 ignition[932]: INFO : Ignition 2.20.0
May 14 00:02:00.228918 ignition[932]: INFO : Stage: mount
May 14 00:02:00.230776 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:02:00.230776 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:02:00.230776 ignition[932]: INFO : mount: mount passed
May 14 00:02:00.230776 ignition[932]: INFO : Ignition finished successfully
May 14 00:02:00.232340 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 00:02:00.235074 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 00:02:00.869914 systemd-networkd[771]: eth0: Gained IPv6LL
May 14 00:02:00.997635 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:02:01.038721 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
May 14 00:02:01.040994 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:02:01.041017 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:02:01.041028 kernel: BTRFS info (device vda6): using free space tree
May 14 00:02:01.044710 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:02:01.045753 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:02:01.073309 ignition[959]: INFO : Ignition 2.20.0
May 14 00:02:01.073309 ignition[959]: INFO : Stage: files
May 14 00:02:01.075429 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:02:01.075429 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:02:01.075429 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
May 14 00:02:01.079492 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 00:02:01.079492 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 00:02:01.079492 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 00:02:01.079492 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 00:02:01.085578 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 00:02:01.085578 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 00:02:01.085578 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 00:02:01.079560 unknown[959]: wrote ssh authorized keys file for user: core
May 14 00:02:01.146017 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 00:02:01.346732 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 00:02:01.360734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 00:02:01.360734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 14 00:02:01.860275 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 00:02:01.950345 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 00:02:01.950345 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 00:02:01.978031 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 14 00:02:02.380159 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 00:02:02.803214 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 00:02:02.803214 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 00:02:02.806921 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 00:02:02.806921 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 00:02:02.806921 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 00:02:02.806921 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 14 00:02:02.806921 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 00:02:02.806921 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 00:02:02.806921 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 14 00:02:02.806921 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 14 00:02:02.843612 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 00:02:02.848093 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 00:02:02.869533 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 00:02:02.869533 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 14 00:02:02.869533 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 14 00:02:02.869533 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 00:02:02.869533 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 00:02:02.869533 ignition[959]: INFO : files: files passed
May 14 00:02:02.869533 ignition[959]: INFO : Ignition finished successfully
May 14 00:02:02.851295 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 00:02:02.870531 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 00:02:02.872723 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 00:02:02.884549 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 00:02:02.884658 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 00:02:02.888238 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 00:02:02.889721 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:02:02.889721 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:02:02.892943 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:02:02.892397 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 00:02:02.894438 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 00:02:02.897590 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 00:02:02.971340 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 00:02:02.971478 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 00:02:02.994310 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 00:02:02.996324 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 00:02:02.998402 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 00:02:02.999307 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 00:02:03.053276 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:02:03.054707 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 00:02:03.096505 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 00:02:03.098947 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:02:03.100257 systemd[1]: Stopped target timers.target - Timer Units. May 14 00:02:03.102272 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:02:03.102403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:02:03.104764 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 00:02:03.106325 systemd[1]: Stopped target basic.target - Basic System. May 14 00:02:03.108415 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 00:02:03.140748 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 00:02:03.142630 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 00:02:03.144851 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 00:02:03.146998 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 00:02:03.149386 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 00:02:03.151469 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 00:02:03.153706 systemd[1]: Stopped target swap.target - Swaps. May 14 00:02:03.155496 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:02:03.155709 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 00:02:03.157996 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 00:02:03.159462 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:02:03.161565 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 00:02:03.161706 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:02:03.193379 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:02:03.193531 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 00:02:03.195880 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:02:03.195994 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 00:02:03.197860 systemd[1]: Stopped target paths.target - Path Units. May 14 00:02:03.199634 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:02:03.205808 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:02:03.207504 systemd[1]: Stopped target slices.target - Slice Units. May 14 00:02:03.209426 systemd[1]: Stopped target sockets.target - Socket Units. May 14 00:02:03.211808 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:02:03.211910 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 00:02:03.237344 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:02:03.237428 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 00:02:03.239296 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
May 14 00:02:03.239420 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:02:03.241345 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:02:03.241456 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 00:02:03.244135 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 00:02:03.246605 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 00:02:03.247897 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:02:03.248015 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:02:03.270621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:02:03.270746 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 00:02:03.279948 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:02:03.280050 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 00:02:03.290869 ignition[1015]: INFO : Ignition 2.20.0 May 14 00:02:03.290869 ignition[1015]: INFO : Stage: umount May 14 00:02:03.292894 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:02:03.292894 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:02:03.292894 ignition[1015]: INFO : umount: umount passed May 14 00:02:03.292894 ignition[1015]: INFO : Ignition finished successfully May 14 00:02:03.294348 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:02:03.294485 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 00:02:03.297067 systemd[1]: Stopped target network.target - Network. May 14 00:02:03.298626 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:02:03.298681 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 00:02:03.300533 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:02:03.300585 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 00:02:03.302387 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:02:03.302439 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 00:02:03.303224 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 00:02:03.303268 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 00:02:03.303653 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 00:02:03.310602 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 00:02:03.328411 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:02:03.328549 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 00:02:03.332333 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 00:02:03.332547 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:02:03.332657 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 00:02:03.337295 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 00:02:03.337960 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:02:03.338017 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 00:02:03.341047 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
May 14 00:02:03.342063 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:02:03.342116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 00:02:03.344383 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:02:03.344433 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:02:03.347309 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:02:03.347357 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 00:02:03.372386 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 00:02:03.372457 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:02:03.375484 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:02:03.380349 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:02:03.380421 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 00:02:03.411876 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:02:03.412049 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:02:03.414519 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:02:03.414571 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 00:02:03.416655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:02:03.416722 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:02:03.437755 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:02:03.437814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 00:02:03.441799 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:02:03.441848 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 00:02:03.443941 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:02:03.443988 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:02:03.449098 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 00:02:03.449772 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 00:02:03.449823 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:02:03.454650 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 00:02:03.454712 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 00:02:03.456227 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:02:03.456275 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:02:03.459682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:02:03.459743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:02:03.465192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 00:02:03.465259 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 00:02:03.469930 systemd[1]: network-cleanup.service: Deactivated successfully. 
May 14 00:02:03.470039 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 00:02:03.475080 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:02:03.478787 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:02:03.479775 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 00:02:03.481863 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:02:03.482981 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 00:02:03.486124 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 00:02:03.488374 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:02:03.489417 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 00:02:03.492422 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 00:02:03.511334 systemd[1]: Switching root. May 14 00:02:03.577179 systemd-journald[194]: Journal stopped May 14 00:02:07.273989 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). May 14 00:02:07.274070 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:02:07.274087 kernel: SELinux: policy capability open_perms=1 May 14 00:02:07.274101 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:02:07.274126 kernel: SELinux: policy capability always_check_network=0 May 14 00:02:07.274140 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:02:07.274155 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:02:07.274169 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:02:07.274183 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:02:07.274204 kernel: audit: type=1403 audit(1747180925.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:02:07.274225 systemd[1]: Successfully loaded SELinux policy in 71.714ms. May 14 00:02:07.274250 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.405ms. May 14 00:02:07.274267 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 00:02:07.274288 systemd[1]: Detected virtualization kvm. May 14 00:02:07.274304 systemd[1]: Detected architecture x86-64. May 14 00:02:07.274319 systemd[1]: Detected first boot. May 14 00:02:07.274334 systemd[1]: Initializing machine ID from VM UUID. May 14 00:02:07.274349 zram_generator::config[1062]: No configuration found. May 14 00:02:07.274364 kernel: Guest personality initialized and is inactive May 14 00:02:07.274379 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 00:02:07.274392 kernel: Initialized host personality May 14 00:02:07.274410 kernel: NET: Registered PF_VSOCK protocol family May 14 00:02:07.274424 systemd[1]: Populated /etc with preset unit settings. May 14 00:02:07.274441 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 00:02:07.274456 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:02:07.274470 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 00:02:07.274485 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
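Taken together, the Ignition "files" stage logged above (the helm and cilium tarballs, the kubernetes sysext image plus its /etc/extensions link, the prepare-helm unit, and the two presets) corresponds to a Butane config along the following lines. The paths and URLs are copied from the log; the SSH key, the update.conf contents, and the unit body are reconstructions, since those payloads are not recorded in the journal (only the unit's Description is confirmed later, when prepare-helm.service starts). A sketch, not the config actually supplied to this machine:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...              # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        - path: /opt/bin/cilium.tar.gz
          contents:
            source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw
        - path: /etc/flatcar/update.conf
          contents:
            inline: |
              GROUP=stable                     # placeholder; contents not logged
        # /home/core/install.sh, nginx.yaml, nfs-pod.yaml and nfs-pvc.yaml are
        # written the same way, with inline or remote contents
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true                        # "setting preset to enabled" above
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            ConditionPathExists=!/opt/bin/helm
            [Service]
            Type=oneshot
            RemainAfterExit=true
            ExecStart=/usr/bin/tar -xf /opt/helm-v3.13.2-linux-amd64.tar.gz -C /opt/bin --strip-components=1 linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
        - name: coreos-metadata.service
          enabled: false                       # "setting preset to disabled" above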
May 14 00:02:07.274500 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 00:02:07.274516 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 00:02:07.274531 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 00:02:07.274549 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 00:02:07.274565 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 00:02:07.274580 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 00:02:07.274595 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 00:02:07.274612 systemd[1]: Created slice user.slice - User and Session Slice. May 14 00:02:07.274626 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:02:07.274642 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:02:07.274668 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 00:02:07.274699 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 00:02:07.274719 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 00:02:07.274734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 00:02:07.274755 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 00:02:07.274770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:02:07.274785 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 00:02:07.274800 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 00:02:07.274815 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 00:02:07.274833 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 00:02:07.274848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:02:07.274863 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 00:02:07.274877 systemd[1]: Reached target slices.target - Slice Units. May 14 00:02:07.274892 systemd[1]: Reached target swap.target - Swaps. May 14 00:02:07.274912 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 00:02:07.274927 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 00:02:07.274942 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 00:02:07.274957 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 00:02:07.274974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 00:02:07.274993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:02:07.275008 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 00:02:07.275022 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 00:02:07.275037 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 00:02:07.275052 systemd[1]: Mounting media.mount - External Media Directory... 
May 14 00:02:07.275067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:07.275082 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 00:02:07.275096 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 00:02:07.275113 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 00:02:07.275129 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:02:07.275144 systemd[1]: Reached target machines.target - Containers. May 14 00:02:07.275159 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 00:02:07.275174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:02:07.275189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 00:02:07.275204 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 00:02:07.275219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:02:07.275233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:02:07.275251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:02:07.275266 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 00:02:07.275280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:02:07.275297 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:02:07.275312 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:02:07.275326 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 00:02:07.275341 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 00:02:07.275355 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:02:07.275374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:02:07.275389 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 00:02:07.275403 kernel: loop: module loaded May 14 00:02:07.275417 kernel: fuse: init (API version 7.39) May 14 00:02:07.275431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 00:02:07.275446 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 00:02:07.275461 kernel: ACPI: bus type drm_connector registered May 14 00:02:07.275494 systemd-journald[1126]: Collecting audit messages is disabled. May 14 00:02:07.275529 systemd-journald[1126]: Journal started May 14 00:02:07.275556 systemd-journald[1126]: Runtime Journal (/run/log/journal/798944795fc54ccfa8843c68f459753d) is 6M, max 48.3M, 42.3M free. May 14 00:02:06.587220 systemd[1]: Queued start job for default target multi-user.target. May 14 00:02:06.601541 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 00:02:06.602025 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 14 00:02:07.279436 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 00:02:07.287889 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 00:02:07.287960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 00:02:07.325345 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:02:07.325411 systemd[1]: Stopped verity-setup.service. May 14 00:02:07.327706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:07.332882 systemd[1]: Started systemd-journald.service - Journal Service. May 14 00:02:07.333696 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 00:02:07.369746 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 00:02:07.371314 systemd[1]: Mounted media.mount - External Media Directory. May 14 00:02:07.372666 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 00:02:07.374162 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 00:02:07.375673 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 00:02:07.377280 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:02:07.379208 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:02:07.379489 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 00:02:07.381321 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:02:07.381583 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:02:07.383360 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:02:07.383641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:02:07.385371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:02:07.385654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:02:07.387841 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:02:07.388116 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 00:02:07.389994 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:02:07.390258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:02:07.392305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 00:02:07.421996 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 00:02:07.424020 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 00:02:07.426295 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 00:02:07.444997 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 00:02:07.450273 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 00:02:07.454865 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 00:02:07.456134 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:02:07.456174 systemd[1]: Reached target local-fs.target - Local File Systems. 
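All the modprobe@*.service activity above comes from a single template unit that systemd instantiates once per module name (configfs, dm_mod, drm, efi_pstore, fuse, loop); each instance runs modprobe once and exits, hence the immediate "Deactivated successfully" lines. Paraphrased from upstream systemd, the template looks roughly like this (a sketch from memory, not the Flatcar-verbatim file):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/usr/sbin/modprobe -abq %I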
May 14 00:02:07.478984 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 00:02:07.485322 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 00:02:07.489600 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 00:02:07.518585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:02:07.520722 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 00:02:07.523097 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 00:02:07.526498 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:02:07.537761 systemd-journald[1126]: Time spent on flushing to /var/log/journal/798944795fc54ccfa8843c68f459753d is 19.505ms for 965 entries. May 14 00:02:07.537761 systemd-journald[1126]: System Journal (/var/log/journal/798944795fc54ccfa8843c68f459753d) is 8M, max 195.6M, 187.6M free. May 14 00:02:07.691340 systemd-journald[1126]: Received client request to flush runtime journal. May 14 00:02:07.691372 kernel: loop0: detected capacity change from 0 to 109808 May 14 00:02:07.691386 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:02:07.533344 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 00:02:07.559734 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:02:07.565182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:02:07.567078 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 00:02:07.574911 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 00:02:07.605479 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 00:02:07.607211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:02:07.609949 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 00:02:07.611393 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 00:02:07.613015 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 00:02:07.622837 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 00:02:07.664239 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. May 14 00:02:07.664256 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. May 14 00:02:07.665864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:02:07.671613 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 00:02:07.674162 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 00:02:07.680232 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 00:02:07.683006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 00:02:07.694915 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
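The journal sizes quoted above (runtime journal capped at 48.3M, system journal at 195.6M) are limits journald derives from the size of the backing filesystem rather than fixed defaults. They can be pinned explicitly in /etc/systemd/journald.conf; a sketch that mirrors the logged caps:

    [Journal]
    Storage=persistent     # flush the runtime journal to /var/log/journal, as happens above
    RuntimeMaxUse=48M
    SystemMaxUse=195M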
May 14 00:02:07.709170 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 00:02:07.711754 kernel: loop1: detected capacity change from 0 to 205544 May 14 00:02:07.712574 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 00:02:07.798297 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 00:02:07.825278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 00:02:07.826740 kernel: loop2: detected capacity change from 0 to 151640 May 14 00:02:07.860477 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. May 14 00:02:07.860500 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. May 14 00:02:07.867441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:02:07.876724 kernel: loop3: detected capacity change from 0 to 109808 May 14 00:02:07.915787 kernel: loop4: detected capacity change from 0 to 205544 May 14 00:02:07.998857 kernel: loop5: detected capacity change from 0 to 151640 May 14 00:02:08.009309 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 14 00:02:08.010138 (sd-merge)[1209]: Merged extensions into '/usr'. May 14 00:02:08.021397 systemd[1]: Reload requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... May 14 00:02:08.021418 systemd[1]: Reloading... May 14 00:02:08.079722 zram_generator::config[1237]: No configuration found. May 14 00:02:08.207100 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:02:08.223219 ldconfig[1176]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:02:08.272363 systemd[1]: Reloading finished in 250 ms. May 14 00:02:08.291180 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 00:02:08.296181 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 00:02:08.298085 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 00:02:08.316245 systemd[1]: Starting ensure-sysext.service... May 14 00:02:08.318413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 00:02:08.345322 systemd[1]: Reload requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)... May 14 00:02:08.345343 systemd[1]: Reloading... May 14 00:02:08.351881 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:02:08.352184 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 00:02:08.353292 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:02:08.353677 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. May 14 00:02:08.353803 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. May 14 00:02:08.358534 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. 
May 14 00:02:08.358553 systemd-tmpfiles[1278]: Skipping /boot May 14 00:02:08.372941 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. May 14 00:02:08.372960 systemd-tmpfiles[1278]: Skipping /boot May 14 00:02:08.411720 zram_generator::config[1307]: No configuration found. May 14 00:02:08.536117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:02:08.604633 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:02:08.605330 systemd[1]: Reloading finished in 259 ms. May 14 00:02:08.636478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:02:08.646546 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:02:08.662009 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 00:02:08.665328 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 00:02:08.700180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 00:02:08.703963 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 00:02:08.719172 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 00:02:08.724901 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:08.725099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:02:08.726442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:02:08.728759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:02:08.759705 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:02:08.761230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:02:08.761382 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:02:08.764001 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:02:08.768047 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 00:02:08.769190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:08.770943 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 00:02:08.797081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:02:08.797393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:02:08.799618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:02:08.799995 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:02:08.801059 augenrules[1375]: No rules May 14 00:02:08.802395 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:02:08.802667 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
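A few entries back, systemd-sysext merged the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images into /usr, which is what the subsequent daemon reload picks up. On the running host the merge can be inspected with the stock CLI (output shape varies by systemd version):

    systemd-sysext status                  # which hierarchies have extensions merged, and from which images
    ls -l /etc/extensions/kubernetes.raw   # the symlink Ignition wrote, pointing into /opt/extensions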
May 14 00:02:08.804485 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:02:08.804785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:02:08.814210 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:08.814551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:02:08.818319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:02:08.825008 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:02:08.833407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:02:08.851771 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:02:08.851939 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:02:08.852284 systemd-udevd[1368]: Using default interface naming scheme 'v255'. May 14 00:02:08.854625 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 00:02:08.855770 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:08.858519 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 00:02:08.893729 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 00:02:08.896402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:02:08.896992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:02:08.902169 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:02:08.920443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:02:08.920808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:02:08.924387 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:02:08.924633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:02:08.927390 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 00:02:08.954591 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 00:02:08.979419 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 00:02:08.980711 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1408) May 14 00:02:08.981365 systemd[1]: Finished ensure-sysext.service. May 14 00:02:08.997934 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:09.001947 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:02:09.011070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:02:09.021169 systemd-resolved[1348]: Positive Trust Anchors: May 14 00:02:09.021191 systemd-resolved[1348]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:02:09.021223 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 00:02:09.023104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:02:09.045157 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:02:09.049224 systemd-resolved[1348]: Defaulting to hostname 'linux'. May 14 00:02:09.051338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:02:09.075003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:02:09.076538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:02:09.076584 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:02:09.078266 augenrules[1427]: /sbin/augenrules: No change May 14 00:02:09.081907 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 00:02:09.100771 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 14 00:02:09.104564 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 00:02:09.129838 augenrules[1450]: No rules May 14 00:02:09.130391 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:02:09.130444 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:09.136619 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 00:02:09.141982 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 14 00:02:09.143928 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:02:09.144422 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:02:09.202750 kernel: ACPI: button: Power Button [PWRF] May 14 00:02:09.202836 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 14 00:02:09.203082 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 14 00:02:09.203273 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 14 00:02:09.167394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:02:09.167756 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:02:09.218910 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:02:09.219240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:02:09.221278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 14 00:02:09.221547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:02:09.271175 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:02:09.271480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:02:09.272709 kernel: mousedev: PS/2 mouse device common for all mice May 14 00:02:09.309029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 00:02:09.328579 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 00:02:09.356711 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 00:02:09.358325 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:02:09.358444 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:02:09.363882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:02:09.402827 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 00:02:09.420804 kernel: kvm_amd: TSC scaling supported May 14 00:02:09.420851 kernel: kvm_amd: Nested Virtualization enabled May 14 00:02:09.420866 kernel: kvm_amd: Nested Paging enabled May 14 00:02:09.420882 kernel: kvm_amd: LBR virtualization supported May 14 00:02:09.421883 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 14 00:02:09.421938 kernel: kvm_amd: Virtual GIF supported May 14 00:02:09.448721 kernel: EDAC MC: Ver: 3.0.0 May 14 00:02:09.472079 systemd-networkd[1444]: lo: Link UP May 14 00:02:09.472092 systemd-networkd[1444]: lo: Gained carrier May 14 00:02:09.473888 systemd-networkd[1444]: Enumeration completed May 14 00:02:09.474263 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:02:09.474268 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:02:09.475010 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 00:02:09.475055 systemd-networkd[1444]: eth0: Link UP May 14 00:02:09.475059 systemd-networkd[1444]: eth0: Gained carrier May 14 00:02:09.475072 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:02:09.494757 systemd-networkd[1444]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:02:09.495394 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. May 14 00:02:10.627628 systemd-resolved[1348]: Clock change detected. Flushing caches. May 14 00:02:10.627668 systemd-timesyncd[1454]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:02:10.627710 systemd-timesyncd[1454]: Initial clock synchronization to Wed 2025-05-14 00:02:10.627588 UTC. May 14 00:02:10.638178 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 00:02:10.639959 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 00:02:10.641664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:02:10.644370 systemd[1]: Reached target network.target - Network. 
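eth0 above is configured by the catch-all /usr/lib/systemd/network/zz-default.network that ships with Flatcar, which is also why networkd warns that it matched "based on potentially unpredictable interface name"; the unit amounts to DHCP on any matching link, roughly (a sketch, not the verbatim file):

    [Match]
    Name=*

    [Network]
    DHCP=yes

Note also that systemd-timesyncd's first synchronization stepped the clock forward by about 1.1 s, which is why systemd-resolved logs "Clock change detected. Flushing caches." and the journal timestamps jump from 00:02:09 to 00:02:10 at this point.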
May 14 00:02:10.645380 systemd[1]: Reached target time-set.target - System Time Set. May 14 00:02:10.648124 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 00:02:10.688581 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 00:02:10.711736 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 00:02:10.716320 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:02:10.729114 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 00:02:10.766492 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 00:02:10.768475 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 00:02:10.770142 systemd[1]: Reached target sysinit.target - System Initialization. May 14 00:02:10.771817 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 00:02:10.773486 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 00:02:10.775589 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 00:02:10.777291 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 00:02:10.779072 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 00:02:10.780818 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:02:10.780856 systemd[1]: Reached target paths.target - Path Units. May 14 00:02:10.782139 systemd[1]: Reached target timers.target - Timer Units. May 14 00:02:10.784465 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 00:02:10.787690 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 00:02:10.793993 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 00:02:10.795841 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 00:02:10.797315 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 00:02:10.802293 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 00:02:10.804000 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 00:02:10.807029 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 00:02:10.809052 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 00:02:10.810311 systemd[1]: Reached target sockets.target - Socket Units. May 14 00:02:10.811320 systemd[1]: Reached target basic.target - Basic System. May 14 00:02:10.811446 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 00:02:10.811486 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 00:02:10.812805 systemd[1]: Starting containerd.service - containerd container runtime... May 14 00:02:10.815273 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 00:02:10.817314 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 14 00:02:10.817850 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 00:02:10.843505 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 00:02:10.870659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 00:02:10.872451 jq[1488]: false May 14 00:02:10.873027 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 00:02:10.877370 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 00:02:10.882343 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 00:02:10.886401 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 00:02:10.889548 extend-filesystems[1489]: Found loop3 May 14 00:02:10.907628 extend-filesystems[1489]: Found loop4 May 14 00:02:10.907628 extend-filesystems[1489]: Found loop5 May 14 00:02:10.907628 extend-filesystems[1489]: Found sr0 May 14 00:02:10.907628 extend-filesystems[1489]: Found vda May 14 00:02:10.907628 extend-filesystems[1489]: Found vda1 May 14 00:02:10.907628 extend-filesystems[1489]: Found vda2 May 14 00:02:10.907628 extend-filesystems[1489]: Found vda3 May 14 00:02:10.907628 extend-filesystems[1489]: Found usr May 14 00:02:10.907628 extend-filesystems[1489]: Found vda4 May 14 00:02:10.907628 extend-filesystems[1489]: Found vda6 May 14 00:02:10.907628 extend-filesystems[1489]: Found vda7 May 14 00:02:10.907628 extend-filesystems[1489]: Found vda9 May 14 00:02:10.907628 extend-filesystems[1489]: Checking size of /dev/vda9 May 14 00:02:10.914545 dbus-daemon[1487]: [system] SELinux support is enabled May 14 00:02:10.921790 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 00:02:10.924085 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:02:10.924687 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:02:10.926151 systemd[1]: Starting update-engine.service - Update Engine... May 14 00:02:10.930147 extend-filesystems[1489]: Resized partition /dev/vda9 May 14 00:02:10.931338 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 00:02:10.934036 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 00:02:10.936470 extend-filesystems[1508]: resize2fs 1.47.2 (1-Jan-2025) May 14 00:02:10.940026 jq[1507]: true May 14 00:02:10.941199 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 00:02:10.945177 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:02:10.945449 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 00:02:10.945784 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:02:10.946025 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 00:02:10.991647 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:02:10.991937 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 14 00:02:10.992258 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1396) May 14 00:02:11.004479 jq[1511]: true May 14 00:02:11.025681 (ntainerd)[1521]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 00:02:11.029189 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:02:11.029239 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 00:02:11.049387 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:02:11.049411 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 00:02:11.069148 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:02:11.073247 tar[1510]: linux-amd64/helm May 14 00:02:11.079682 update_engine[1503]: I20250514 00:02:11.079033 1503 main.cc:92] Flatcar Update Engine starting May 14 00:02:11.085090 update_engine[1503]: I20250514 00:02:11.083798 1503 update_check_scheduler.cc:74] Next update check in 7m51s May 14 00:02:11.085808 systemd-logind[1502]: Watching system buttons on /dev/input/event1 (Power Button) May 14 00:02:11.085836 systemd-logind[1502]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 00:02:11.088983 systemd-logind[1502]: New seat seat0. May 14 00:02:11.121968 systemd[1]: Started systemd-logind.service - User Login Management. May 14 00:02:11.139373 systemd[1]: Started update-engine.service - Update Engine. May 14 00:02:11.145736 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 00:02:11.178066 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 00:02:11.296054 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:02:11.344034 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 00:02:11.394133 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:02:11.402736 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 00:02:11.405309 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:45558.service - OpenSSH per-connection server daemon (10.0.0.1:45558). May 14 00:02:11.441522 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:02:11.441829 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 00:02:11.472731 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 00:02:11.599256 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:02:12.359406 containerd[1521]: time="2025-05-14T00:02:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 00:02:12.359725 sshd[1557]: Connection closed by authenticating user core 10.0.0.1 port 45558 [preauth] May 14 00:02:11.610732 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 14 00:02:12.360097 containerd[1521]: time="2025-05-14T00:02:12.359864412Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 14 00:02:11.664174 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 00:02:12.360271 extend-filesystems[1508]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:02:12.360271 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:02:12.360271 extend-filesystems[1508]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:02:11.665332 systemd-networkd[1444]: eth0: Gained IPv6LL May 14 00:02:12.366984 extend-filesystems[1489]: Resized filesystem in /dev/vda9 May 14 00:02:11.668422 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 00:02:11.671731 systemd[1]: Reached target getty.target - Login Prompts. May 14 00:02:11.673879 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 00:02:11.676921 systemd[1]: Reached target network-online.target - Network is Online. May 14 00:02:11.680415 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 00:02:11.719688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:11.728817 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 00:02:11.779580 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 00:02:11.779862 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 00:02:11.796685 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 00:02:11.798479 systemd[1]: sshd@0-10.0.0.109:22-10.0.0.1:45558.service: Deactivated successfully. May 14 00:02:12.362100 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:02:12.362471 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
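For scale: the resize messages above count 4 KiB blocks, so the on-line resize grew /dev/vda9 from roughly 2.1 GiB to 7.1 GiB. A quick check of the logged figures (a sketch, not part of the log):

    # Convert the ext4 resize figures above (4 KiB blocks) into bytes/GiB.
    BLOCK_SIZE = 4096  # "(4k) blocks" per the extend-filesystems report
    for label, blocks in [("before", 553472), ("after", 1864699)]:
        size = blocks * BLOCK_SIZE
        print(f"{label}: {size} bytes = {size / 2**30:.2f} GiB")
    # before: 2267021312 bytes = 2.11 GiB
    # after:  7637807104 bytes = 7.11 GiB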
May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.376423573Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.915µs" May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.376472955Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.376492311Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.376710210Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.376751498Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.376783688Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.376855954Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.376866543Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.377216360Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.377230115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.377246586Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:02:12.377245 containerd[1521]: time="2025-05-14T00:02:12.377254942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 00:02:12.377505 containerd[1521]: time="2025-05-14T00:02:12.377345342Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 00:02:12.377601 containerd[1521]: time="2025-05-14T00:02:12.377575373Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:02:12.377633 containerd[1521]: time="2025-05-14T00:02:12.377614086Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:02:12.377633 containerd[1521]: time="2025-05-14T00:02:12.377624776Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 00:02:12.377676 containerd[1521]: time="2025-05-14T00:02:12.377664440Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 00:02:12.377916 containerd[1521]: 
time="2025-05-14T00:02:12.377893139Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 00:02:12.377982 containerd[1521]: time="2025-05-14T00:02:12.377963291Z" level=info msg="metadata content store policy set" policy=shared May 14 00:02:12.386733 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 00:02:12.442336 tar[1510]: linux-amd64/LICENSE May 14 00:02:12.442860 tar[1510]: linux-amd64/README.md May 14 00:02:12.472794 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 00:02:12.854663 bash[1537]: Updated "/home/core/.ssh/authorized_keys" May 14 00:02:12.857007 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 00:02:12.859992 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 00:02:12.994424 containerd[1521]: time="2025-05-14T00:02:12.994335368Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 00:02:12.994555 containerd[1521]: time="2025-05-14T00:02:12.994443852Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 00:02:12.994555 containerd[1521]: time="2025-05-14T00:02:12.994468929Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 00:02:12.994555 containerd[1521]: time="2025-05-14T00:02:12.994486261Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 00:02:12.994555 containerd[1521]: time="2025-05-14T00:02:12.994503484Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 00:02:12.994555 containerd[1521]: time="2025-05-14T00:02:12.994517971Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 00:02:12.994555 containerd[1521]: time="2025-05-14T00:02:12.994533690Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 00:02:12.994555 containerd[1521]: time="2025-05-14T00:02:12.994551905Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 00:02:12.994727 containerd[1521]: time="2025-05-14T00:02:12.994569888Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 00:02:12.994727 containerd[1521]: time="2025-05-14T00:02:12.994585748Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 00:02:12.994727 containerd[1521]: time="2025-05-14T00:02:12.994598352Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 00:02:12.994727 containerd[1521]: time="2025-05-14T00:02:12.994616706Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 00:02:12.994866 containerd[1521]: time="2025-05-14T00:02:12.994848090Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 00:02:12.994896 containerd[1521]: time="2025-05-14T00:02:12.994876213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 00:02:12.994924 containerd[1521]: time="2025-05-14T00:02:12.994902312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content 
type=io.containerd.grpc.v1 May 14 00:02:12.994949 containerd[1521]: time="2025-05-14T00:02:12.994923271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 00:02:12.994949 containerd[1521]: time="2025-05-14T00:02:12.994942127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 00:02:12.995003 containerd[1521]: time="2025-05-14T00:02:12.994960711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 00:02:12.995003 containerd[1521]: time="2025-05-14T00:02:12.994980298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 00:02:12.995003 containerd[1521]: time="2025-05-14T00:02:12.994996959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 00:02:12.995085 containerd[1521]: time="2025-05-14T00:02:12.995014693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 00:02:12.995085 containerd[1521]: time="2025-05-14T00:02:12.995060278Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 00:02:12.995085 containerd[1521]: time="2025-05-14T00:02:12.995077040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 00:02:12.995277 containerd[1521]: time="2025-05-14T00:02:12.995242831Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 00:02:12.995277 containerd[1521]: time="2025-05-14T00:02:12.995271254Z" level=info msg="Start snapshots syncer" May 14 00:02:12.995319 containerd[1521]: time="2025-05-14T00:02:12.995293836Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 00:02:12.996450 containerd[1521]: time="2025-05-14T00:02:12.995977939Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 00:02:12.996450 containerd[1521]: time="2025-05-14T00:02:12.996141907Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 00:02:12.996725 containerd[1521]: time="2025-05-14T00:02:12.996353253Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 00:02:12.996725 containerd[1521]: time="2025-05-14T00:02:12.996551415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 00:02:12.996725 containerd[1521]: time="2025-05-14T00:02:12.996598614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 00:02:12.996725 containerd[1521]: time="2025-05-14T00:02:12.996624412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 00:02:12.996725 containerd[1521]: time="2025-05-14T00:02:12.996639130Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 00:02:12.996725 containerd[1521]: time="2025-05-14T00:02:12.996663105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 00:02:12.996725 containerd[1521]: time="2025-05-14T00:02:12.996685697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 00:02:12.996725 containerd[1521]: time="2025-05-14T00:02:12.996709812Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 00:02:12.996933 containerd[1521]: time="2025-05-14T00:02:12.996775556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 00:02:12.996933 containerd[1521]: 
time="2025-05-14T00:02:12.996804390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 00:02:12.996933 containerd[1521]: time="2025-05-14T00:02:12.996833374Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 00:02:12.997012 containerd[1521]: time="2025-05-14T00:02:12.996969129Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:02:12.997100 containerd[1521]: time="2025-05-14T00:02:12.997051173Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:02:12.997100 containerd[1521]: time="2025-05-14T00:02:12.997070769Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:02:12.997100 containerd[1521]: time="2025-05-14T00:02:12.997085637Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:02:12.997100 containerd[1521]: time="2025-05-14T00:02:12.997098672Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 00:02:12.997100 containerd[1521]: time="2025-05-14T00:02:12.997112778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 00:02:12.997373 containerd[1521]: time="2025-05-14T00:02:12.997130060Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 00:02:12.997373 containerd[1521]: time="2025-05-14T00:02:12.997158053Z" level=info msg="runtime interface created" May 14 00:02:12.997373 containerd[1521]: time="2025-05-14T00:02:12.997164144Z" level=info msg="created NRI interface" May 14 00:02:12.997373 containerd[1521]: time="2025-05-14T00:02:12.997177610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 00:02:12.997373 containerd[1521]: time="2025-05-14T00:02:12.997197757Z" level=info msg="Connect containerd service" May 14 00:02:12.997373 containerd[1521]: time="2025-05-14T00:02:12.997276235Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 00:02:12.998467 containerd[1521]: time="2025-05-14T00:02:12.998437613Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.165692746Z" level=info msg="Start subscribing containerd event" May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.165766454Z" level=info msg="Start recovering state" May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.165918650Z" level=info msg="Start event monitor" May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.165929801Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.165998409Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.165945199Z" level=info msg="Start cni network conf syncer for default" May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.166058141Z" level=info msg="Start streaming server" May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.166073079Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.166135677Z" level=info msg="runtime interface starting up..." May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.166144854Z" level=info msg="starting plugins..." May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.166163669Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 00:02:13.168317 containerd[1521]: time="2025-05-14T00:02:13.166615236Z" level=info msg="containerd successfully booted in 1.276791s" May 14 00:02:13.166481 systemd[1]: Started containerd.service - containerd container runtime. May 14 00:02:13.802931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:13.804972 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 00:02:13.806321 systemd[1]: Startup finished in 815ms (kernel) + 10.133s (initrd) + 6.870s (userspace) = 17.820s. May 14 00:02:13.812626 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:14.244597 kubelet[1621]: E0514 00:02:14.244410 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:14.249571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:14.249809 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:14.250275 systemd[1]: kubelet.service: Consumed 1.258s CPU time, 239.3M memory peak. May 14 00:02:21.808835 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:40612.service - OpenSSH per-connection server daemon (10.0.0.1:40612). May 14 00:02:21.863447 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 40612 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:02:21.865397 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:21.876541 systemd-logind[1502]: New session 1 of user core. May 14 00:02:21.878062 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 00:02:21.879377 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 00:02:21.907713 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 00:02:21.910452 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 00:02:21.926699 (systemd)[1639]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:02:21.929153 systemd-logind[1502]: New session c1 of user core. May 14 00:02:22.085585 systemd[1639]: Queued start job for default target default.target. May 14 00:02:22.096704 systemd[1639]: Created slice app.slice - User Application Slice. May 14 00:02:22.096736 systemd[1639]: Reached target paths.target - Paths. 
May 14 00:02:22.096806 systemd[1639]: Reached target timers.target - Timers. May 14 00:02:22.098597 systemd[1639]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 00:02:22.112455 systemd[1639]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 00:02:22.112612 systemd[1639]: Reached target sockets.target - Sockets. May 14 00:02:22.112660 systemd[1639]: Reached target basic.target - Basic System. May 14 00:02:22.112707 systemd[1639]: Reached target default.target - Main User Target. May 14 00:02:22.112752 systemd[1639]: Startup finished in 177ms. May 14 00:02:22.113169 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 00:02:22.114884 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 00:02:22.178508 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:40626.service - OpenSSH per-connection server daemon (10.0.0.1:40626). May 14 00:02:22.230296 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 40626 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:02:22.232493 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:22.238527 systemd-logind[1502]: New session 2 of user core. May 14 00:02:22.249386 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 00:02:22.304286 sshd[1652]: Connection closed by 10.0.0.1 port 40626 May 14 00:02:22.304691 sshd-session[1650]: pam_unix(sshd:session): session closed for user core May 14 00:02:22.317314 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:40626.service: Deactivated successfully. May 14 00:02:22.319692 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:02:22.321190 systemd-logind[1502]: Session 2 logged out. Waiting for processes to exit. May 14 00:02:22.322777 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:40628.service - OpenSSH per-connection server daemon (10.0.0.1:40628). May 14 00:02:22.323729 systemd-logind[1502]: Removed session 2. May 14 00:02:22.371846 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:02:22.373728 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:22.378290 systemd-logind[1502]: New session 3 of user core. May 14 00:02:22.389457 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 00:02:22.440112 sshd[1660]: Connection closed by 10.0.0.1 port 40628 May 14 00:02:22.440566 sshd-session[1657]: pam_unix(sshd:session): session closed for user core May 14 00:02:22.453662 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:40628.service: Deactivated successfully. May 14 00:02:22.455600 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:02:22.457503 systemd-logind[1502]: Session 3 logged out. Waiting for processes to exit. May 14 00:02:22.459015 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:40638.service - OpenSSH per-connection server daemon (10.0.0.1:40638). May 14 00:02:22.460081 systemd-logind[1502]: Removed session 3. May 14 00:02:22.513468 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 40638 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:02:22.515535 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:22.520533 systemd-logind[1502]: New session 4 of user core. May 14 00:02:22.534514 systemd[1]: Started session-4.scope - Session 4 of User core. 
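The "SHA256:7f2Xacy…" strings in the "Accepted publickey" entries above are OpenSSH key fingerprints: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch of the computation (the helper name is ours, not OpenSSH's):

    import base64, hashlib

    def openssh_sha256_fingerprint(authorized_keys_line: str) -> str:
        # An authorized_keys line is "<type> <base64 blob> [comment]"; the
        # fingerprint is base64(SHA-256(blob)) with the '=' padding stripped.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")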
May 14 00:02:22.592450 sshd[1668]: Connection closed by 10.0.0.1 port 40638 May 14 00:02:22.592779 sshd-session[1665]: pam_unix(sshd:session): session closed for user core May 14 00:02:22.608662 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:40638.service: Deactivated successfully. May 14 00:02:22.611195 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:02:22.613537 systemd-logind[1502]: Session 4 logged out. Waiting for processes to exit. May 14 00:02:22.615531 systemd[1]: Started sshd@5-10.0.0.109:22-10.0.0.1:40650.service - OpenSSH per-connection server daemon (10.0.0.1:40650). May 14 00:02:22.616528 systemd-logind[1502]: Removed session 4. May 14 00:02:22.662655 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 40650 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:02:22.664438 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:22.670026 systemd-logind[1502]: New session 5 of user core. May 14 00:02:22.687482 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 00:02:22.749513 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 00:02:22.749941 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:02:22.778463 sudo[1677]: pam_unix(sudo:session): session closed for user root May 14 00:02:22.780377 sshd[1676]: Connection closed by 10.0.0.1 port 40650 May 14 00:02:22.780777 sshd-session[1673]: pam_unix(sshd:session): session closed for user core May 14 00:02:22.795520 systemd[1]: sshd@5-10.0.0.109:22-10.0.0.1:40650.service: Deactivated successfully. May 14 00:02:22.798082 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:02:22.800573 systemd-logind[1502]: Session 5 logged out. Waiting for processes to exit. May 14 00:02:22.802716 systemd[1]: Started sshd@6-10.0.0.109:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656). May 14 00:02:22.803715 systemd-logind[1502]: Removed session 5. May 14 00:02:22.855996 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:02:22.857736 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:22.862665 systemd-logind[1502]: New session 6 of user core. May 14 00:02:22.880481 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 00:02:22.936349 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 00:02:22.936712 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:02:22.940893 sudo[1687]: pam_unix(sudo:session): session closed for user root May 14 00:02:22.947499 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 00:02:22.947844 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:02:22.958364 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:02:23.008900 augenrules[1709]: No rules May 14 00:02:23.010633 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:02:23.010932 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 14 00:02:23.012143 sudo[1686]: pam_unix(sudo:session): session closed for user root May 14 00:02:23.013792 sshd[1685]: Connection closed by 10.0.0.1 port 40656 May 14 00:02:23.014060 sshd-session[1682]: pam_unix(sshd:session): session closed for user core May 14 00:02:23.025007 systemd[1]: sshd@6-10.0.0.109:22-10.0.0.1:40656.service: Deactivated successfully. May 14 00:02:23.027040 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:02:23.028961 systemd-logind[1502]: Session 6 logged out. Waiting for processes to exit. May 14 00:02:23.030253 systemd[1]: Started sshd@7-10.0.0.109:22-10.0.0.1:40672.service - OpenSSH per-connection server daemon (10.0.0.1:40672). May 14 00:02:23.031006 systemd-logind[1502]: Removed session 6. May 14 00:02:23.086909 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 40672 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:02:23.088991 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:23.094297 systemd-logind[1502]: New session 7 of user core. May 14 00:02:23.105565 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 00:02:23.162793 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:02:23.163278 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:02:23.886668 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 00:02:23.900790 (dockerd)[1742]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 00:02:24.472869 dockerd[1742]: time="2025-05-14T00:02:24.472792603Z" level=info msg="Starting up" May 14 00:02:24.474065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:02:24.474847 dockerd[1742]: time="2025-05-14T00:02:24.474806361Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 00:02:24.475722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:24.869055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:24.881547 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:24.984822 kubelet[1775]: E0514 00:02:24.984749 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:24.991493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:24.991720 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:24.992105 systemd[1]: kubelet.service: Consumed 269ms CPU time, 94.2M memory peak. May 14 00:02:25.917356 dockerd[1742]: time="2025-05-14T00:02:25.917285028Z" level=info msg="Loading containers: start." May 14 00:02:26.503738 kernel: Initializing XFRM netlink socket May 14 00:02:26.745906 systemd-networkd[1444]: docker0: Link UP May 14 00:02:26.867823 dockerd[1742]: time="2025-05-14T00:02:26.863673993Z" level=info msg="Loading containers: done." 
May 14 00:02:26.954002 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1910145770-merged.mount: Deactivated successfully. May 14 00:02:26.957251 dockerd[1742]: time="2025-05-14T00:02:26.957148284Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:02:26.957669 dockerd[1742]: time="2025-05-14T00:02:26.957344973Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 00:02:26.957669 dockerd[1742]: time="2025-05-14T00:02:26.957520523Z" level=info msg="Daemon has completed initialization" May 14 00:02:27.006873 dockerd[1742]: time="2025-05-14T00:02:27.006784297Z" level=info msg="API listen on /run/docker.sock" May 14 00:02:27.007083 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 00:02:28.319254 containerd[1521]: time="2025-05-14T00:02:28.318731555Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 00:02:32.074217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25070857.mount: Deactivated successfully. May 14 00:02:34.415652 containerd[1521]: time="2025-05-14T00:02:34.415577746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:34.416347 containerd[1521]: time="2025-05-14T00:02:34.416275986Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 14 00:02:34.417686 containerd[1521]: time="2025-05-14T00:02:34.417635576Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:34.421794 containerd[1521]: time="2025-05-14T00:02:34.421760052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:34.422746 containerd[1521]: time="2025-05-14T00:02:34.422697791Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 6.103917525s" May 14 00:02:34.422746 containerd[1521]: time="2025-05-14T00:02:34.422738788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 14 00:02:34.424688 containerd[1521]: time="2025-05-14T00:02:34.424638201Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 00:02:35.116362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:02:35.118602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:35.306049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
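The "Pulled image" messages report both a byte count and a wall-clock duration, so effective pull throughput falls out directly. For the kube-apiserver pull above (the same arithmetic applies to the later pulls):

    # Throughput of the kube-apiserver pull, from the figures logged above.
    size_bytes = 27_957_787    # size "27957787"
    duration_s = 6.103917525   # "in 6.103917525s"
    print(f"{size_bytes / duration_s / 1e6:.2f} MB/s")  # -> 4.58 MB/s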
May 14 00:02:35.310708 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:35.783233 kubelet[2029]: E0514 00:02:35.783129 2029 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:35.787673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:35.787880 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:35.788241 systemd[1]: kubelet.service: Consumed 647ms CPU time, 97.1M memory peak. May 14 00:02:36.871128 containerd[1521]: time="2025-05-14T00:02:36.871055090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:36.872221 containerd[1521]: time="2025-05-14T00:02:36.872128333Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 14 00:02:36.873485 containerd[1521]: time="2025-05-14T00:02:36.873449561Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:36.877355 containerd[1521]: time="2025-05-14T00:02:36.877298972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:36.878285 containerd[1521]: time="2025-05-14T00:02:36.878248823Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.453555529s" May 14 00:02:36.878331 containerd[1521]: time="2025-05-14T00:02:36.878286734Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 14 00:02:36.878905 containerd[1521]: time="2025-05-14T00:02:36.878877793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 00:02:39.360256 containerd[1521]: time="2025-05-14T00:02:39.360154389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:39.392627 containerd[1521]: time="2025-05-14T00:02:39.392524586Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 14 00:02:39.423184 containerd[1521]: time="2025-05-14T00:02:39.423103383Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:39.464039 containerd[1521]: time="2025-05-14T00:02:39.463938448Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:39.465134 containerd[1521]: time="2025-05-14T00:02:39.465073106Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.586160809s" May 14 00:02:39.465134 containerd[1521]: time="2025-05-14T00:02:39.465125695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 14 00:02:39.465869 containerd[1521]: time="2025-05-14T00:02:39.465817102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 00:02:40.725734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050947053.mount: Deactivated successfully. May 14 00:02:41.812855 containerd[1521]: time="2025-05-14T00:02:41.812776607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:41.813871 containerd[1521]: time="2025-05-14T00:02:41.813812871Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 14 00:02:41.815310 containerd[1521]: time="2025-05-14T00:02:41.815269353Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:41.817860 containerd[1521]: time="2025-05-14T00:02:41.817797244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:41.818494 containerd[1521]: time="2025-05-14T00:02:41.818454487Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.352594214s" May 14 00:02:41.818545 containerd[1521]: time="2025-05-14T00:02:41.818491857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 14 00:02:41.819182 containerd[1521]: time="2025-05-14T00:02:41.819123712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 00:02:43.059994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090400766.mount: Deactivated successfully. 
May 14 00:02:44.221558 containerd[1521]: time="2025-05-14T00:02:44.221490695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:44.222507 containerd[1521]: time="2025-05-14T00:02:44.222444926Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 00:02:44.224076 containerd[1521]: time="2025-05-14T00:02:44.224008732Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:44.227948 containerd[1521]: time="2025-05-14T00:02:44.227817177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:44.228993 containerd[1521]: time="2025-05-14T00:02:44.228892100Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.409721981s" May 14 00:02:44.228993 containerd[1521]: time="2025-05-14T00:02:44.228939312Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 00:02:44.229587 containerd[1521]: time="2025-05-14T00:02:44.229558756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 00:02:45.264357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245373336.mount: Deactivated successfully. 
May 14 00:02:45.273496 containerd[1521]: time="2025-05-14T00:02:45.273429680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:45.274462 containerd[1521]: time="2025-05-14T00:02:45.274407933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 00:02:45.276148 containerd[1521]: time="2025-05-14T00:02:45.276074061Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:45.278526 containerd[1521]: time="2025-05-14T00:02:45.278476026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:45.279284 containerd[1521]: time="2025-05-14T00:02:45.279239676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.049650782s" May 14 00:02:45.279284 containerd[1521]: time="2025-05-14T00:02:45.279280104Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 00:02:45.279876 containerd[1521]: time="2025-05-14T00:02:45.279824712Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 00:02:45.866114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 00:02:45.867992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:46.059815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:46.077561 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:46.464845 kubelet[2109]: E0514 00:02:46.464784 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:46.470262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:46.470488 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:46.471008 systemd[1]: kubelet.service: Consumed 234ms CPU time, 97.5M memory peak. May 14 00:02:46.923575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955201742.mount: Deactivated successfully. 
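kubelet is crash-looping here on the missing /var/lib/kubelet/config.yaml (a file typically written later by kubeadm init/join, so this is expected on a not-yet-configured node), and systemd restarts it on a steady cadence: the three "Scheduled restart job" stamps above are a little over ten seconds apart, consistent with a roughly 10-second restart delay. A sketch that extracts the intervals from the journal timestamps:

    from datetime import datetime

    # "Scheduled restart job" timestamps for restart counters 1-3, from above.
    restarts = ["00:02:24.474065", "00:02:35.116362", "00:02:45.866114"]
    times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
    print([round((b - a).total_seconds(), 3) for a, b in zip(times, times[1:])])
    # -> [10.642, 10.75]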
May 14 00:02:52.742141 containerd[1521]: time="2025-05-14T00:02:52.742053493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:52.747075 containerd[1521]: time="2025-05-14T00:02:52.746986351Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 14 00:02:52.749683 containerd[1521]: time="2025-05-14T00:02:52.749624754Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:52.754659 containerd[1521]: time="2025-05-14T00:02:52.754609521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:52.755918 containerd[1521]: time="2025-05-14T00:02:52.755880275Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 7.47601238s" May 14 00:02:52.755918 containerd[1521]: time="2025-05-14T00:02:52.755913638Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 14 00:02:55.134008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:55.134182 systemd[1]: kubelet.service: Consumed 234ms CPU time, 97.5M memory peak. May 14 00:02:55.136635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:55.173294 systemd[1]: Reload requested from client PID 2201 ('systemctl') (unit session-7.scope)... May 14 00:02:55.173322 systemd[1]: Reloading... May 14 00:02:55.260283 zram_generator::config[2256]: No configuration found. May 14 00:02:55.996564 update_engine[1503]: I20250514 00:02:55.996445 1503 update_attempter.cc:509] Updating boot flags... May 14 00:02:56.251192 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:02:56.376918 systemd[1]: Reloading finished in 1203 ms. May 14 00:02:56.452859 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:56.456265 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:02:56.456631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:56.456688 systemd[1]: kubelet.service: Consumed 173ms CPU time, 83.6M memory peak. May 14 00:02:56.458962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:56.481283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2294) May 14 00:02:56.517228 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2296) May 14 00:02:56.584768 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2296) May 14 00:02:56.695291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:02:56.702127 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:02:56.778467 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:02:56.778467 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:02:56.778467 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:02:56.778967 kubelet[2309]: I0514 00:02:56.778512 2309 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:02:57.172512 kubelet[2309]: I0514 00:02:57.172446 2309 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 00:02:57.172512 kubelet[2309]: I0514 00:02:57.172487 2309 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:02:57.172846 kubelet[2309]: I0514 00:02:57.172810 2309 server.go:929] "Client rotation is on, will bootstrap in background" May 14 00:02:57.200054 kubelet[2309]: E0514 00:02:57.200005 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:57.202579 kubelet[2309]: I0514 00:02:57.202486 2309 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:02:57.215686 kubelet[2309]: I0514 00:02:57.215635 2309 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 00:02:57.226016 kubelet[2309]: I0514 00:02:57.225962 2309 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:02:57.226137 kubelet[2309]: I0514 00:02:57.226116 2309 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 00:02:57.226369 kubelet[2309]: I0514 00:02:57.226321 2309 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:02:57.226552 kubelet[2309]: I0514 00:02:57.226354 2309 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:02:57.226782 kubelet[2309]: I0514 00:02:57.226565 2309 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:02:57.226782 kubelet[2309]: I0514 00:02:57.226578 2309 container_manager_linux.go:300] "Creating device plugin manager" May 14 00:02:57.226782 kubelet[2309]: I0514 00:02:57.226723 2309 state_mem.go:36] "Initialized new in-memory state store" May 14 00:02:57.228372 kubelet[2309]: I0514 00:02:57.228337 2309 kubelet.go:408] "Attempting to sync node with API server" May 14 00:02:57.228372 kubelet[2309]: I0514 00:02:57.228362 2309 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:02:57.228447 kubelet[2309]: I0514 00:02:57.228409 2309 kubelet.go:314] "Adding apiserver pod source" May 14 00:02:57.228447 kubelet[2309]: I0514 00:02:57.228430 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:02:57.234323 kubelet[2309]: W0514 00:02:57.234242 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:02:57.234323 kubelet[2309]: E0514 00:02:57.234314 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:57.235438 kubelet[2309]: I0514 00:02:57.235414 2309 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:02:57.236747 kubelet[2309]: W0514 00:02:57.236680 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:02:57.236795 kubelet[2309]: E0514 00:02:57.236749 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:57.237750 kubelet[2309]: I0514 00:02:57.237733 2309 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:02:57.237833 kubelet[2309]: W0514 00:02:57.237819 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:02:57.240175 kubelet[2309]: I0514 00:02:57.238578 2309 server.go:1269] "Started kubelet" May 14 00:02:57.240175 kubelet[2309]: I0514 00:02:57.238804 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:02:57.240175 kubelet[2309]: I0514 00:02:57.239032 2309 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:02:57.240175 kubelet[2309]: I0514 00:02:57.239241 2309 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:02:57.245857 kubelet[2309]: I0514 00:02:57.245773 2309 server.go:460] "Adding debug handlers to kubelet server" May 14 00:02:57.246129 kubelet[2309]: I0514 00:02:57.246107 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:02:57.247032 kubelet[2309]: I0514 00:02:57.247001 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:02:57.247885 kubelet[2309]: I0514 00:02:57.247869 2309 factory.go:221] Registration of the systemd container factory successfully May 14 00:02:57.248051 kubelet[2309]: I0514 00:02:57.248033 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:02:57.249974 kubelet[2309]: I0514 00:02:57.249927 2309 factory.go:221] Registration of the containerd container factory successfully May 14 00:02:57.251271 kubelet[2309]: E0514 00:02:57.250848 2309 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:02:57.251783 kubelet[2309]: I0514 00:02:57.251763 2309 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 00:02:57.252504 kubelet[2309]: I0514 00:02:57.252301 2309 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 00:02:57.252504 kubelet[2309]: I0514 00:02:57.252380 2309 reconciler.go:26] "Reconciler: start to sync state" May 14 00:02:57.252741 kubelet[2309]: E0514 00:02:57.252711 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:02:57.257241 kubelet[2309]: W0514 00:02:57.257174 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:02:57.257282 kubelet[2309]: E0514 00:02:57.257251 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:57.257840 kubelet[2309]: E0514 00:02:57.257794 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="200ms" May 14 00:02:57.261509 kubelet[2309]: E0514 00:02:57.257692 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3bd7829faf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:02:57.238552328 +0000 UTC m=+0.507796911,LastTimestamp:2025-05-14 00:02:57.238552328 +0000 UTC m=+0.507796911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:02:57.268668 kubelet[2309]: I0514 00:02:57.268616 2309 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:02:57.268668 kubelet[2309]: I0514 00:02:57.268636 2309 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:02:57.268668 kubelet[2309]: I0514 00:02:57.268652 2309 state_mem.go:36] "Initialized new in-memory state store" May 14 00:02:57.353532 kubelet[2309]: E0514 00:02:57.353458 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:02:57.454133 kubelet[2309]: E0514 00:02:57.453976 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:02:57.458557 kubelet[2309]: E0514 00:02:57.458526 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection 
refused" interval="400ms" May 14 00:02:57.555024 kubelet[2309]: E0514 00:02:57.554963 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:02:57.655705 kubelet[2309]: E0514 00:02:57.655601 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:02:57.756413 kubelet[2309]: E0514 00:02:57.756285 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:02:57.759060 kubelet[2309]: I0514 00:02:57.759014 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:02:57.759364 kubelet[2309]: I0514 00:02:57.759345 2309 policy_none.go:49] "None policy: Start" May 14 00:02:57.760144 kubelet[2309]: I0514 00:02:57.760098 2309 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:02:57.760144 kubelet[2309]: I0514 00:02:57.760126 2309 state_mem.go:35] "Initializing new in-memory state store" May 14 00:02:57.760768 kubelet[2309]: I0514 00:02:57.760736 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:02:57.760831 kubelet[2309]: I0514 00:02:57.760770 2309 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:02:57.762229 kubelet[2309]: I0514 00:02:57.760986 2309 kubelet.go:2321] "Starting kubelet main sync loop" May 14 00:02:57.762229 kubelet[2309]: E0514 00:02:57.761042 2309 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:02:57.762229 kubelet[2309]: W0514 00:02:57.761550 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:02:57.762229 kubelet[2309]: E0514 00:02:57.761619 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:57.770704 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:02:57.788223 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 00:02:57.791925 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 00:02:57.802625 kubelet[2309]: I0514 00:02:57.802581 2309 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:02:57.802993 kubelet[2309]: I0514 00:02:57.802883 2309 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:02:57.802993 kubelet[2309]: I0514 00:02:57.802914 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:02:57.803546 kubelet[2309]: I0514 00:02:57.803254 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:02:57.804487 kubelet[2309]: E0514 00:02:57.804465 2309 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:02:57.859848 kubelet[2309]: E0514 00:02:57.859775 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="800ms" May 14 00:02:57.870545 systemd[1]: Created slice kubepods-burstable-poda90c40cad0f2e426cb63d3b6b3f39408.slice - libcontainer container kubepods-burstable-poda90c40cad0f2e426cb63d3b6b3f39408.slice. May 14 00:02:57.882795 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 00:02:57.896746 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 00:02:57.906090 kubelet[2309]: I0514 00:02:57.906022 2309 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:57.906619 kubelet[2309]: E0514 00:02:57.906559 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 14 00:02:57.958162 kubelet[2309]: I0514 00:02:57.958109 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a90c40cad0f2e426cb63d3b6b3f39408-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a90c40cad0f2e426cb63d3b6b3f39408\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:57.958162 kubelet[2309]: I0514 00:02:57.958164 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a90c40cad0f2e426cb63d3b6b3f39408-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a90c40cad0f2e426cb63d3b6b3f39408\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:57.958462 kubelet[2309]: I0514 00:02:57.958193 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:57.958462 kubelet[2309]: I0514 00:02:57.958247 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:57.958462 kubelet[2309]: I0514 00:02:57.958272 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:57.958462 kubelet[2309]: I0514 00:02:57.958301 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:57.958462 kubelet[2309]: I0514 00:02:57.958325 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 00:02:57.958580 kubelet[2309]: I0514 00:02:57.958344 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a90c40cad0f2e426cb63d3b6b3f39408-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a90c40cad0f2e426cb63d3b6b3f39408\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:57.958580 kubelet[2309]: I0514 00:02:57.958366 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:58.109343 kubelet[2309]: I0514 00:02:58.109172 2309 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:58.109721 kubelet[2309]: E0514 00:02:58.109667 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 14 00:02:58.119570 kubelet[2309]: W0514 00:02:58.119482 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:02:58.119570 kubelet[2309]: E0514 00:02:58.119567 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:58.181912 kubelet[2309]: E0514 00:02:58.181858 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 14 00:02:58.182656 containerd[1521]: time="2025-05-14T00:02:58.182600175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a90c40cad0f2e426cb63d3b6b3f39408,Namespace:kube-system,Attempt:0,}" May 14 00:02:58.194902 kubelet[2309]: E0514 00:02:58.194872 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:58.195466 containerd[1521]: time="2025-05-14T00:02:58.195404257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 00:02:58.199694 kubelet[2309]: E0514 00:02:58.199653 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:58.200032 containerd[1521]: time="2025-05-14T00:02:58.199998042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 00:02:58.215874 kubelet[2309]: W0514 00:02:58.215826 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:02:58.216070 kubelet[2309]: E0514 00:02:58.215882 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:58.511095 kubelet[2309]: I0514 00:02:58.511060 2309 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:58.512166 kubelet[2309]: E0514 00:02:58.511410 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 14 00:02:58.595587 kubelet[2309]: W0514 00:02:58.595520 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:02:58.595587 kubelet[2309]: E0514 00:02:58.595578 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:58.660675 kubelet[2309]: E0514 00:02:58.660616 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="1.6s" May 14 00:02:58.699534 kubelet[2309]: W0514 00:02:58.699438 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:02:58.699534 kubelet[2309]: E0514 00:02:58.699527 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:59.313691 kubelet[2309]: I0514 00:02:59.313652 2309 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:59.314098 kubelet[2309]: E0514 00:02:59.314035 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 14 00:02:59.341098 kubelet[2309]: E0514 00:02:59.341022 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:00.261850 kubelet[2309]: E0514 00:03:00.261782 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="3.2s" May 14 00:03:00.271796 containerd[1521]: time="2025-05-14T00:03:00.271695155Z" level=info msg="connecting to shim 61f1382c90bb667d699dc829c09471174c6c46a6965ce5482409663ce366c855" address="unix:///run/containerd/s/b7f25bc41a2ccbe2c676cf1b5cf30d0a9fca2cb012813018f5ccf111546806c0" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:00.294706 containerd[1521]: time="2025-05-14T00:03:00.294652339Z" level=info msg="connecting to shim 884a2eb6226a96ded094a6c81afa5457f0a8974462fdfadbec035f1b80965752" address="unix:///run/containerd/s/509150120e66c983a78faabe44db11fcee2c55eda62924d0742e836046c7a23c" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:00.306764 systemd[1]: Started cri-containerd-61f1382c90bb667d699dc829c09471174c6c46a6965ce5482409663ce366c855.scope - libcontainer container 61f1382c90bb667d699dc829c09471174c6c46a6965ce5482409663ce366c855. May 14 00:03:00.309796 containerd[1521]: time="2025-05-14T00:03:00.307333555Z" level=info msg="connecting to shim 0196d70d5be0ebff9ce044e3ae2c0397001541c0003a0dc1e7d31a8c38a9d36a" address="unix:///run/containerd/s/f025147050d15826b81504345f48d729b77cd8f9f8cee2ad12b188692404c56e" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:00.329340 systemd[1]: Started cri-containerd-884a2eb6226a96ded094a6c81afa5457f0a8974462fdfadbec035f1b80965752.scope - libcontainer container 884a2eb6226a96ded094a6c81afa5457f0a8974462fdfadbec035f1b80965752. May 14 00:03:00.336147 systemd[1]: Started cri-containerd-0196d70d5be0ebff9ce044e3ae2c0397001541c0003a0dc1e7d31a8c38a9d36a.scope - libcontainer container 0196d70d5be0ebff9ce044e3ae2c0397001541c0003a0dc1e7d31a8c38a9d36a. 
May 14 00:03:00.424854 containerd[1521]: time="2025-05-14T00:03:00.424768847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"61f1382c90bb667d699dc829c09471174c6c46a6965ce5482409663ce366c855\"" May 14 00:03:00.425913 kubelet[2309]: E0514 00:03:00.425869 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:00.427590 containerd[1521]: time="2025-05-14T00:03:00.427555527Z" level=info msg="CreateContainer within sandbox \"61f1382c90bb667d699dc829c09471174c6c46a6965ce5482409663ce366c855\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:03:00.441337 containerd[1521]: time="2025-05-14T00:03:00.441283175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a90c40cad0f2e426cb63d3b6b3f39408,Namespace:kube-system,Attempt:0,} returns sandbox id \"884a2eb6226a96ded094a6c81afa5457f0a8974462fdfadbec035f1b80965752\"" May 14 00:03:00.442200 kubelet[2309]: E0514 00:03:00.442154 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:00.444223 containerd[1521]: time="2025-05-14T00:03:00.444164334Z" level=info msg="CreateContainer within sandbox \"884a2eb6226a96ded094a6c81afa5457f0a8974462fdfadbec035f1b80965752\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:03:00.452727 containerd[1521]: time="2025-05-14T00:03:00.452672233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0196d70d5be0ebff9ce044e3ae2c0397001541c0003a0dc1e7d31a8c38a9d36a\"" May 14 00:03:00.453446 kubelet[2309]: E0514 00:03:00.453392 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:00.454964 containerd[1521]: time="2025-05-14T00:03:00.454922366Z" level=info msg="CreateContainer within sandbox \"0196d70d5be0ebff9ce044e3ae2c0397001541c0003a0dc1e7d31a8c38a9d36a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:03:00.511657 kubelet[2309]: W0514 00:03:00.511599 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:03:00.511657 kubelet[2309]: E0514 00:03:00.511662 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:00.899369 kubelet[2309]: W0514 00:03:00.899314 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:03:00.899369 kubelet[2309]: E0514 00:03:00.899373 2309 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:00.915843 kubelet[2309]: I0514 00:03:00.915818 2309 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:03:00.916076 kubelet[2309]: E0514 00:03:00.916053 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 14 00:03:01.597046 containerd[1521]: time="2025-05-14T00:03:01.596986838Z" level=info msg="Container 38580dcf2e6124e1ceb7311fd731016672f85f3210f11453b088a49f6f1e395a: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:01.639835 kubelet[2309]: W0514 00:03:01.639778 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:03:01.639835 kubelet[2309]: E0514 00:03:01.639836 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:01.641027 kubelet[2309]: W0514 00:03:01.640981 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 14 00:03:01.641027 kubelet[2309]: E0514 00:03:01.641008 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:01.730903 containerd[1521]: time="2025-05-14T00:03:01.730838184Z" level=info msg="Container 82d130534a7ef7850ac189373813af2b1fa1a004f91de430261b669f5a2dd4b8: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:02.001376 containerd[1521]: time="2025-05-14T00:03:02.001307674Z" level=info msg="Container 49f36814fa4e4029334007b43c8ed7a0aee1f720508515cb954fce13d3575814: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:02.423798 containerd[1521]: time="2025-05-14T00:03:02.423734467Z" level=info msg="CreateContainer within sandbox \"61f1382c90bb667d699dc829c09471174c6c46a6965ce5482409663ce366c855\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38580dcf2e6124e1ceb7311fd731016672f85f3210f11453b088a49f6f1e395a\"" May 14 00:03:02.424582 containerd[1521]: time="2025-05-14T00:03:02.424548177Z" level=info msg="StartContainer for \"38580dcf2e6124e1ceb7311fd731016672f85f3210f11453b088a49f6f1e395a\"" May 14 00:03:02.425875 containerd[1521]: time="2025-05-14T00:03:02.425835874Z" level=info msg="connecting to shim 38580dcf2e6124e1ceb7311fd731016672f85f3210f11453b088a49f6f1e395a" 
address="unix:///run/containerd/s/b7f25bc41a2ccbe2c676cf1b5cf30d0a9fca2cb012813018f5ccf111546806c0" protocol=ttrpc version=3 May 14 00:03:02.453416 systemd[1]: Started cri-containerd-38580dcf2e6124e1ceb7311fd731016672f85f3210f11453b088a49f6f1e395a.scope - libcontainer container 38580dcf2e6124e1ceb7311fd731016672f85f3210f11453b088a49f6f1e395a. May 14 00:03:02.549657 containerd[1521]: time="2025-05-14T00:03:02.549601847Z" level=info msg="CreateContainer within sandbox \"884a2eb6226a96ded094a6c81afa5457f0a8974462fdfadbec035f1b80965752\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"82d130534a7ef7850ac189373813af2b1fa1a004f91de430261b669f5a2dd4b8\"" May 14 00:03:02.549928 containerd[1521]: time="2025-05-14T00:03:02.549881617Z" level=info msg="CreateContainer within sandbox \"0196d70d5be0ebff9ce044e3ae2c0397001541c0003a0dc1e7d31a8c38a9d36a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49f36814fa4e4029334007b43c8ed7a0aee1f720508515cb954fce13d3575814\"" May 14 00:03:02.550794 containerd[1521]: time="2025-05-14T00:03:02.550724222Z" level=info msg="StartContainer for \"49f36814fa4e4029334007b43c8ed7a0aee1f720508515cb954fce13d3575814\"" May 14 00:03:02.551193 containerd[1521]: time="2025-05-14T00:03:02.550732417Z" level=info msg="StartContainer for \"82d130534a7ef7850ac189373813af2b1fa1a004f91de430261b669f5a2dd4b8\"" May 14 00:03:02.551193 containerd[1521]: time="2025-05-14T00:03:02.551101425Z" level=info msg="StartContainer for \"38580dcf2e6124e1ceb7311fd731016672f85f3210f11453b088a49f6f1e395a\" returns successfully" May 14 00:03:02.553186 containerd[1521]: time="2025-05-14T00:03:02.552314360Z" level=info msg="connecting to shim 49f36814fa4e4029334007b43c8ed7a0aee1f720508515cb954fce13d3575814" address="unix:///run/containerd/s/f025147050d15826b81504345f48d729b77cd8f9f8cee2ad12b188692404c56e" protocol=ttrpc version=3 May 14 00:03:02.553186 containerd[1521]: time="2025-05-14T00:03:02.552586646Z" level=info msg="connecting to shim 82d130534a7ef7850ac189373813af2b1fa1a004f91de430261b669f5a2dd4b8" address="unix:///run/containerd/s/509150120e66c983a78faabe44db11fcee2c55eda62924d0742e836046c7a23c" protocol=ttrpc version=3 May 14 00:03:02.575443 systemd[1]: Started cri-containerd-49f36814fa4e4029334007b43c8ed7a0aee1f720508515cb954fce13d3575814.scope - libcontainer container 49f36814fa4e4029334007b43c8ed7a0aee1f720508515cb954fce13d3575814. May 14 00:03:02.579939 systemd[1]: Started cri-containerd-82d130534a7ef7850ac189373813af2b1fa1a004f91de430261b669f5a2dd4b8.scope - libcontainer container 82d130534a7ef7850ac189373813af2b1fa1a004f91de430261b669f5a2dd4b8. 
May 14 00:03:02.768848 containerd[1521]: time="2025-05-14T00:03:02.768675622Z" level=info msg="StartContainer for \"49f36814fa4e4029334007b43c8ed7a0aee1f720508515cb954fce13d3575814\" returns successfully" May 14 00:03:02.771770 containerd[1521]: time="2025-05-14T00:03:02.771727768Z" level=info msg="StartContainer for \"82d130534a7ef7850ac189373813af2b1fa1a004f91de430261b669f5a2dd4b8\" returns successfully" May 14 00:03:02.777972 kubelet[2309]: E0514 00:03:02.777633 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:02.783813 kubelet[2309]: E0514 00:03:02.783578 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:02.785265 kubelet[2309]: E0514 00:03:02.785097 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:03.787852 kubelet[2309]: E0514 00:03:03.787812 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:03.787852 kubelet[2309]: E0514 00:03:03.787812 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:03.788874 kubelet[2309]: E0514 00:03:03.788013 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:04.117990 kubelet[2309]: I0514 00:03:04.117962 2309 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:03:04.597545 kubelet[2309]: E0514 00:03:04.597433 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 00:03:04.705555 kubelet[2309]: I0514 00:03:04.705490 2309 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 00:03:04.705555 kubelet[2309]: E0514 00:03:04.705538 2309 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 00:03:04.789946 kubelet[2309]: E0514 00:03:04.789911 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:04.874587 kubelet[2309]: E0514 00:03:04.874534 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:04.975195 kubelet[2309]: E0514 00:03:04.975123 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.075365 kubelet[2309]: E0514 00:03:05.075303 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.175796 kubelet[2309]: E0514 00:03:05.175632 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.276037 kubelet[2309]: E0514 00:03:05.275977 2309 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.376615 kubelet[2309]: E0514 00:03:05.376561 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.477783 kubelet[2309]: E0514 00:03:05.477628 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.578361 kubelet[2309]: E0514 00:03:05.578289 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.679080 kubelet[2309]: E0514 00:03:05.679031 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.681978 kubelet[2309]: E0514 00:03:05.681938 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:05.779662 kubelet[2309]: E0514 00:03:05.779513 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.880535 kubelet[2309]: E0514 00:03:05.880476 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:05.981473 kubelet[2309]: E0514 00:03:05.981406 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.082431 kubelet[2309]: E0514 00:03:06.082290 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.183153 kubelet[2309]: E0514 00:03:06.183082 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.283387 kubelet[2309]: E0514 00:03:06.283308 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.383847 kubelet[2309]: E0514 00:03:06.383723 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.484127 kubelet[2309]: E0514 00:03:06.484064 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.584991 kubelet[2309]: E0514 00:03:06.584940 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.686072 kubelet[2309]: E0514 00:03:06.685936 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.786149 kubelet[2309]: E0514 00:03:06.786083 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.887231 kubelet[2309]: E0514 00:03:06.887161 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:06.987977 kubelet[2309]: E0514 00:03:06.987812 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.088552 kubelet[2309]: E0514 00:03:07.088494 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.189321 kubelet[2309]: E0514 00:03:07.189258 2309 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"localhost\" not found" May 14 00:03:07.290272 kubelet[2309]: E0514 00:03:07.290095 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.390761 kubelet[2309]: E0514 00:03:07.390687 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.491656 kubelet[2309]: E0514 00:03:07.491601 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.592490 kubelet[2309]: E0514 00:03:07.592350 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.693040 kubelet[2309]: E0514 00:03:07.692983 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.793606 kubelet[2309]: E0514 00:03:07.793558 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.804808 kubelet[2309]: E0514 00:03:07.804779 2309 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:03:07.894185 kubelet[2309]: E0514 00:03:07.894112 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:08.242734 kubelet[2309]: I0514 00:03:08.242593 2309 apiserver.go:52] "Watching apiserver" May 14 00:03:08.252530 kubelet[2309]: I0514 00:03:08.252485 2309 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 00:03:08.656686 systemd[1]: Reload requested from client PID 2581 ('systemctl') (unit session-7.scope)... May 14 00:03:08.656707 systemd[1]: Reloading... May 14 00:03:08.751314 zram_generator::config[2631]: No configuration found. May 14 00:03:08.884172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:03:09.025646 systemd[1]: Reloading finished in 368 ms. May 14 00:03:09.058154 kubelet[2309]: I0514 00:03:09.058098 2309 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:03:09.058311 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:03:09.077237 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:03:09.077639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:03:09.077711 systemd[1]: kubelet.service: Consumed 1.179s CPU time, 120.9M memory peak. May 14 00:03:09.080322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:03:09.283887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:03:09.290069 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:03:09.330434 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:03:09.330434 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 14 00:03:09.330434 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:03:09.330828 kubelet[2670]: I0514 00:03:09.330486 2670 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:03:09.337252 kubelet[2670]: I0514 00:03:09.337053 2670 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 00:03:09.337252 kubelet[2670]: I0514 00:03:09.337076 2670 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:03:09.337454 kubelet[2670]: I0514 00:03:09.337421 2670 server.go:929] "Client rotation is on, will bootstrap in background" May 14 00:03:09.339028 kubelet[2670]: I0514 00:03:09.339006 2670 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:03:09.341290 kubelet[2670]: I0514 00:03:09.340779 2670 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:03:09.344121 kubelet[2670]: I0514 00:03:09.344102 2670 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 00:03:09.349013 kubelet[2670]: I0514 00:03:09.348975 2670 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:03:09.349122 kubelet[2670]: I0514 00:03:09.349103 2670 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 00:03:09.349324 kubelet[2670]: I0514 00:03:09.349290 2670 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:03:09.349475 kubelet[2670]: I0514 00:03:09.349321 2670 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:03:09.349565 kubelet[2670]: I0514 00:03:09.349479 2670 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:03:09.349565 kubelet[2670]: I0514 00:03:09.349489 2670 container_manager_linux.go:300] "Creating device plugin manager" May 14 00:03:09.349565 kubelet[2670]: I0514 00:03:09.349517 2670 state_mem.go:36] "Initialized new in-memory state store" May 14 00:03:09.349646 kubelet[2670]: I0514 00:03:09.349635 2670 kubelet.go:408] "Attempting to sync node with API server" May 14 00:03:09.349676 kubelet[2670]: I0514 00:03:09.349649 2670 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:03:09.349704 kubelet[2670]: I0514 00:03:09.349676 2670 kubelet.go:314] "Adding apiserver pod source" May 14 00:03:09.349704 kubelet[2670]: I0514 00:03:09.349696 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:03:09.350474 kubelet[2670]: I0514 00:03:09.350445 2670 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:03:09.350860 kubelet[2670]: I0514 00:03:09.350845 2670 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:03:09.351315 kubelet[2670]: I0514 00:03:09.351297 2670 server.go:1269] "Started kubelet" May 14 00:03:09.351713 kubelet[2670]: I0514 00:03:09.351595 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:03:09.352349 kubelet[2670]: I0514 00:03:09.352322 2670 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:03:09.353499 kubelet[2670]: I0514 00:03:09.353461 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:03:09.353693 kubelet[2670]: I0514 00:03:09.353670 2670 server.go:460] "Adding debug handlers to kubelet server" May 14 00:03:09.353785 kubelet[2670]: I0514 00:03:09.353768 2670 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:03:09.355822 kubelet[2670]: I0514 00:03:09.355794 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:03:09.358229 kubelet[2670]: I0514 00:03:09.357882 2670 factory.go:221] Registration of the systemd container factory successfully May 14 00:03:09.359552 kubelet[2670]: I0514 00:03:09.359529 2670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:03:09.361930 kubelet[2670]: I0514 00:03:09.361904 2670 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 00:03:09.362173 kubelet[2670]: E0514 00:03:09.362148 2670 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:09.365176 kubelet[2670]: I0514 00:03:09.364161 2670 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 00:03:09.365176 kubelet[2670]: I0514 00:03:09.364358 2670 reconciler.go:26] "Reconciler: start to sync state" May 14 00:03:09.368659 kubelet[2670]: E0514 00:03:09.367880 2670 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:03:09.369373 kubelet[2670]: I0514 00:03:09.369359 2670 factory.go:221] Registration of the containerd container factory successfully May 14 00:03:09.372100 kubelet[2670]: I0514 00:03:09.372057 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:03:09.373311 kubelet[2670]: I0514 00:03:09.373281 2670 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:03:09.373359 kubelet[2670]: I0514 00:03:09.373327 2670 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:03:09.373359 kubelet[2670]: I0514 00:03:09.373355 2670 kubelet.go:2321] "Starting kubelet main sync loop" May 14 00:03:09.373447 kubelet[2670]: E0514 00:03:09.373410 2670 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:03:09.406090 kubelet[2670]: I0514 00:03:09.406021 2670 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:03:09.406090 kubelet[2670]: I0514 00:03:09.406039 2670 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:03:09.406090 kubelet[2670]: I0514 00:03:09.406058 2670 state_mem.go:36] "Initialized new in-memory state store" May 14 00:03:09.406367 kubelet[2670]: I0514 00:03:09.406306 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:03:09.406367 kubelet[2670]: I0514 00:03:09.406319 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:03:09.406367 kubelet[2670]: I0514 00:03:09.406340 2670 policy_none.go:49] "None policy: Start" May 14 00:03:09.406894 kubelet[2670]: I0514 00:03:09.406876 2670 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:03:09.406942 kubelet[2670]: I0514 00:03:09.406901 2670 state_mem.go:35] "Initializing new in-memory state store" May 14 00:03:09.407068 kubelet[2670]: I0514 00:03:09.407055 2670 state_mem.go:75] "Updated machine memory state" May 14 00:03:09.411752 kubelet[2670]: I0514 00:03:09.411716 2670 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:03:09.412148 kubelet[2670]: I0514 00:03:09.412003 2670 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:03:09.412148 kubelet[2670]: I0514 00:03:09.412024 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:03:09.412435 kubelet[2670]: I0514 00:03:09.412250 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:03:09.522573 kubelet[2670]: I0514 00:03:09.522501 2670 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:03:09.566237 kubelet[2670]: I0514 00:03:09.566035 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 00:03:09.566237 kubelet[2670]: I0514 00:03:09.566095 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a90c40cad0f2e426cb63d3b6b3f39408-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a90c40cad0f2e426cb63d3b6b3f39408\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:09.566237 kubelet[2670]: I0514 00:03:09.566122 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a90c40cad0f2e426cb63d3b6b3f39408-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a90c40cad0f2e426cb63d3b6b3f39408\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:09.566237 kubelet[2670]: I0514 00:03:09.566154 2670 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:09.566237 kubelet[2670]: I0514 00:03:09.566223 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:09.566494 kubelet[2670]: I0514 00:03:09.566251 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:09.566494 kubelet[2670]: I0514 00:03:09.566274 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a90c40cad0f2e426cb63d3b6b3f39408-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a90c40cad0f2e426cb63d3b6b3f39408\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:09.566494 kubelet[2670]: I0514 00:03:09.566294 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:09.566494 kubelet[2670]: I0514 00:03:09.566310 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:09.676076 kubelet[2670]: I0514 00:03:09.676036 2670 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 00:03:09.676248 kubelet[2670]: I0514 00:03:09.676164 2670 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 00:03:09.884670 sudo[2705]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 00:03:09.885025 sudo[2705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 00:03:09.943751 kubelet[2670]: E0514 00:03:09.943608 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:09.944080 kubelet[2670]: E0514 00:03:09.944048 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:09.944747 kubelet[2670]: E0514 00:03:09.944546 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:10.350996 kubelet[2670]: I0514 00:03:10.350833 2670 apiserver.go:52] "Watching apiserver" May 14 00:03:10.376967 kubelet[2670]: I0514 00:03:10.376911 2670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 00:03:10.389425 kubelet[2670]: E0514 00:03:10.388793 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:10.389425 kubelet[2670]: E0514 00:03:10.388889 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:10.401307 kubelet[2670]: E0514 00:03:10.400708 2670 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 00:03:10.401307 kubelet[2670]: E0514 00:03:10.400940 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:10.442229 kubelet[2670]: I0514 00:03:10.442049 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.442028063 podStartE2EDuration="1.442028063s" podCreationTimestamp="2025-05-14 00:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:10.441827795 +0000 UTC m=+1.147256554" watchObservedRunningTime="2025-05-14 00:03:10.442028063 +0000 UTC m=+1.147456822" May 14 00:03:10.442433 kubelet[2670]: I0514 00:03:10.442333 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.442325985 podStartE2EDuration="1.442325985s" podCreationTimestamp="2025-05-14 00:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:10.414079542 +0000 UTC m=+1.119508301" watchObservedRunningTime="2025-05-14 00:03:10.442325985 +0000 UTC m=+1.147754744" May 14 00:03:10.463418 sudo[2705]: pam_unix(sudo:session): session closed for user root May 14 00:03:10.465462 kubelet[2670]: I0514 00:03:10.465391 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.465374394 podStartE2EDuration="1.465374394s" podCreationTimestamp="2025-05-14 00:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:10.452610096 +0000 UTC m=+1.158038855" watchObservedRunningTime="2025-05-14 00:03:10.465374394 +0000 UTC m=+1.170803153" May 14 00:03:11.389968 kubelet[2670]: E0514 00:03:11.389921 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:11.389968 kubelet[2670]: E0514 00:03:11.389940 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:12.102993 sudo[1721]: 
pam_unix(sudo:session): session closed for user root May 14 00:03:12.105411 sshd[1720]: Connection closed by 10.0.0.1 port 40672 May 14 00:03:12.109322 sshd-session[1717]: pam_unix(sshd:session): session closed for user core May 14 00:03:12.115456 systemd[1]: sshd@7-10.0.0.109:22-10.0.0.1:40672.service: Deactivated successfully. May 14 00:03:12.118034 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:03:12.118279 systemd[1]: session-7.scope: Consumed 5.172s CPU time, 258.4M memory peak. May 14 00:03:12.119505 systemd-logind[1502]: Session 7 logged out. Waiting for processes to exit. May 14 00:03:12.120551 systemd-logind[1502]: Removed session 7. May 14 00:03:13.026939 kubelet[2670]: E0514 00:03:13.026886 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:13.095405 kubelet[2670]: I0514 00:03:13.095362 2670 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:03:13.095887 containerd[1521]: time="2025-05-14T00:03:13.095833969Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:03:13.096325 kubelet[2670]: I0514 00:03:13.096018 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:03:13.393091 kubelet[2670]: E0514 00:03:13.393046 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:13.918104 systemd[1]: Created slice kubepods-besteffort-podd91561d9_b635_43a2_bc49_171189a6046d.slice - libcontainer container kubepods-besteffort-podd91561d9_b635_43a2_bc49_171189a6046d.slice. May 14 00:03:13.933454 systemd[1]: Created slice kubepods-burstable-pod4956121a_a340_4510_a7ef_1f247bd5ebf1.slice - libcontainer container kubepods-burstable-pod4956121a_a340_4510_a7ef_1f247bd5ebf1.slice. 
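The kubepods-…slice units created above make the kubelet's systemd cgroup naming visible: QoS class plus pod UID, with the UID's dashes turned into underscores (compare pod UID d91561d9-b635-43a2-bc49-171189a6046d with slice …podd91561d9_b635_43a2_bc49_171189a6046d.slice). A small sketch of that mapping, as an illustrative reconstruction rather than kubelet's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reconstructs the naming visible in the log: the kubelet's
    // systemd cgroup driver derives "kubepods-<qos>-pod<uid>.slice",
    // replacing the dashes in the pod UID with underscores because systemd
    // reserves "-" as the hierarchy separator inside slice names.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // UID of kube-proxy-dg6b6, taken from the entries above.
        fmt.Println(sliceName("besteffort", "d91561d9-b635-43a2-bc49-171189a6046d"))
        // Output: kubepods-besteffort-podd91561d9_b635_43a2_bc49_171189a6046d.slice
    }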
May 14 00:03:14.001751 kubelet[2670]: I0514 00:03:14.001545 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d91561d9-b635-43a2-bc49-171189a6046d-kube-proxy\") pod \"kube-proxy-dg6b6\" (UID: \"d91561d9-b635-43a2-bc49-171189a6046d\") " pod="kube-system/kube-proxy-dg6b6" May 14 00:03:14.001751 kubelet[2670]: I0514 00:03:14.001596 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-etc-cni-netd\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.001751 kubelet[2670]: I0514 00:03:14.001626 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-xtables-lock\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.001751 kubelet[2670]: I0514 00:03:14.001649 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4956121a-a340-4510-a7ef-1f247bd5ebf1-clustermesh-secrets\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.001751 kubelet[2670]: I0514 00:03:14.001668 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d91561d9-b635-43a2-bc49-171189a6046d-xtables-lock\") pod \"kube-proxy-dg6b6\" (UID: \"d91561d9-b635-43a2-bc49-171189a6046d\") " pod="kube-system/kube-proxy-dg6b6" May 14 00:03:14.001751 kubelet[2670]: I0514 00:03:14.001685 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cni-path\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002022 kubelet[2670]: I0514 00:03:14.001704 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-lib-modules\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002022 kubelet[2670]: I0514 00:03:14.001724 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-run\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002022 kubelet[2670]: I0514 00:03:14.001743 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-cgroup\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002022 kubelet[2670]: I0514 00:03:14.001761 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-host-proc-sys-kernel\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002022 kubelet[2670]: I0514 00:03:14.001777 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lwwv\" (UniqueName: \"kubernetes.io/projected/d91561d9-b635-43a2-bc49-171189a6046d-kube-api-access-2lwwv\") pod \"kube-proxy-dg6b6\" (UID: \"d91561d9-b635-43a2-bc49-171189a6046d\") " pod="kube-system/kube-proxy-dg6b6" May 14 00:03:14.002022 kubelet[2670]: I0514 00:03:14.001795 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4956121a-a340-4510-a7ef-1f247bd5ebf1-hubble-tls\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002172 kubelet[2670]: I0514 00:03:14.001811 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d91561d9-b635-43a2-bc49-171189a6046d-lib-modules\") pod \"kube-proxy-dg6b6\" (UID: \"d91561d9-b635-43a2-bc49-171189a6046d\") " pod="kube-system/kube-proxy-dg6b6" May 14 00:03:14.002172 kubelet[2670]: I0514 00:03:14.001835 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-bpf-maps\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002172 kubelet[2670]: I0514 00:03:14.001856 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-hostproc\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002172 kubelet[2670]: I0514 00:03:14.001884 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-config-path\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002172 kubelet[2670]: I0514 00:03:14.001915 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-host-proc-sys-net\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.002172 kubelet[2670]: I0514 00:03:14.001940 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5qn\" (UniqueName: \"kubernetes.io/projected/4956121a-a340-4510-a7ef-1f247bd5ebf1-kube-api-access-fq5qn\") pod \"cilium-pfrsr\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " pod="kube-system/cilium-pfrsr" May 14 00:03:14.009773 systemd[1]: Created slice kubepods-besteffort-pod3180526a_450e_4fa4_aa2e_9d5d13c41558.slice - libcontainer container kubepods-besteffort-pod3180526a_450e_4fa4_aa2e_9d5d13c41558.slice. 
May 14 00:03:14.103248 kubelet[2670]: I0514 00:03:14.103160 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3180526a-450e-4fa4-aa2e-9d5d13c41558-cilium-config-path\") pod \"cilium-operator-5d85765b45-j8bff\" (UID: \"3180526a-450e-4fa4-aa2e-9d5d13c41558\") " pod="kube-system/cilium-operator-5d85765b45-j8bff" May 14 00:03:14.103754 kubelet[2670]: I0514 00:03:14.103322 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5b6\" (UniqueName: \"kubernetes.io/projected/3180526a-450e-4fa4-aa2e-9d5d13c41558-kube-api-access-nf5b6\") pod \"cilium-operator-5d85765b45-j8bff\" (UID: \"3180526a-450e-4fa4-aa2e-9d5d13c41558\") " pod="kube-system/cilium-operator-5d85765b45-j8bff" May 14 00:03:14.231779 kubelet[2670]: E0514 00:03:14.231667 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:14.232273 containerd[1521]: time="2025-05-14T00:03:14.232232002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dg6b6,Uid:d91561d9-b635-43a2-bc49-171189a6046d,Namespace:kube-system,Attempt:0,}" May 14 00:03:14.237068 kubelet[2670]: E0514 00:03:14.237033 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:14.237499 containerd[1521]: time="2025-05-14T00:03:14.237465408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pfrsr,Uid:4956121a-a340-4510-a7ef-1f247bd5ebf1,Namespace:kube-system,Attempt:0,}" May 14 00:03:14.312717 kubelet[2670]: E0514 00:03:14.312662 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:14.313418 containerd[1521]: time="2025-05-14T00:03:14.313272912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j8bff,Uid:3180526a-450e-4fa4-aa2e-9d5d13c41558,Namespace:kube-system,Attempt:0,}" May 14 00:03:15.197444 containerd[1521]: time="2025-05-14T00:03:15.197393210Z" level=info msg="connecting to shim 6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041" address="unix:///run/containerd/s/8195f3b281d9e59a0b51b4a96d91c97f7c4f0c0cb21433fea65395ff45bf31fb" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:15.232377 systemd[1]: Started cri-containerd-6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041.scope - libcontainer container 6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041. 
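The recurring dns.go:153 errors are the kubelet warning that this node's resolv.conf lists more nameservers than the classic resolver limit of three (glibc's MAXNS), so only the first three are applied: 1.1.1.1, 1.0.0.1 and 8.8.8.8. A minimal sketch of that truncation, with a hypothetical fourth server standing in for whatever entry the node actually dropped:

    package main

    import "fmt"

    // The resolver honours at most three nameservers (glibc MAXNS), and the
    // kubelet applies the same cap when it builds a pod's resolv.conf. This
    // sketch only illustrates the truncation the log reports.
    const maxNameservers = 3

    func applyNameserverLimit(servers []string) (kept, omitted []string) {
        if len(servers) <= maxNameservers {
            return servers, nil
        }
        return servers[:maxNameservers], servers[maxNameservers:]
    }

    func main() {
        // The first three entries are the applied line from the log; the
        // fourth is a hypothetical stand-in for the dropped entry.
        kept, omitted := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        fmt.Println("applied:", kept, "omitted:", omitted)
    }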
May 14 00:03:15.318959 containerd[1521]: time="2025-05-14T00:03:15.318902218Z" level=info msg="connecting to shim f9190c632c78a8f886a54877fed616ebf07a7831dc5902a3afee0dbd50668de3" address="unix:///run/containerd/s/a334bfc2cb55bb406e2413e901c995d9c322d49a80ae31bca23dca58acb5a18b" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:15.329113 containerd[1521]: time="2025-05-14T00:03:15.329073068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pfrsr,Uid:4956121a-a340-4510-a7ef-1f247bd5ebf1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\"" May 14 00:03:15.330103 kubelet[2670]: E0514 00:03:15.330064 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:15.331979 containerd[1521]: time="2025-05-14T00:03:15.331838205Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 00:03:15.347471 systemd[1]: Started cri-containerd-f9190c632c78a8f886a54877fed616ebf07a7831dc5902a3afee0dbd50668de3.scope - libcontainer container f9190c632c78a8f886a54877fed616ebf07a7831dc5902a3afee0dbd50668de3. May 14 00:03:15.365357 containerd[1521]: time="2025-05-14T00:03:15.365306823Z" level=info msg="connecting to shim afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910" address="unix:///run/containerd/s/091efdee5e3973f0c196fe735248646237bb3ad5aa62b041b6b9cf1c8e63c3b9" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:15.397375 systemd[1]: Started cri-containerd-afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910.scope - libcontainer container afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910. 
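These "connecting to shim" lines show that one runtime-v2 shim serves every container in a pod: the kube-proxy sandbox f9190c63… and, later, its kube-proxy container both use the a334bfc2… socket, while all of cilium-pfrsr's containers go through the 8195f3b2… socket registered with its sandbox. A minimal probe of such a socket, assuming you are on the live node (the dial is only a connectivity check, not a ttrpc client):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Shim socket path copied from the "connecting to shim" entries above;
        // it exists on the node only while the cilium-pfrsr sandbox is running.
        const sock = "/run/containerd/s/8195f3b281d9e59a0b51b4a96d91c97f7c4f0c0cb21433fea65395ff45bf31fb"

        // A bare unix-socket dial confirms the shim is listening; real
        // clients (containerd itself) speak ttrpc over this connection.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("shim not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("shim socket is accepting connections")
    }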
May 14 00:03:15.425732 containerd[1521]: time="2025-05-14T00:03:15.425664639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dg6b6,Uid:d91561d9-b635-43a2-bc49-171189a6046d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9190c632c78a8f886a54877fed616ebf07a7831dc5902a3afee0dbd50668de3\"" May 14 00:03:15.426552 kubelet[2670]: E0514 00:03:15.426523 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:15.428901 containerd[1521]: time="2025-05-14T00:03:15.428868742Z" level=info msg="CreateContainer within sandbox \"f9190c632c78a8f886a54877fed616ebf07a7831dc5902a3afee0dbd50668de3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:03:15.550843 containerd[1521]: time="2025-05-14T00:03:15.550702281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j8bff,Uid:3180526a-450e-4fa4-aa2e-9d5d13c41558,Namespace:kube-system,Attempt:0,} returns sandbox id \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\"" May 14 00:03:15.551669 kubelet[2670]: E0514 00:03:15.551628 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:15.664179 containerd[1521]: time="2025-05-14T00:03:15.664110897Z" level=info msg="Container 177ec8e23ab7efa98233e8c8e59012ea60e5ef058e2793da53ef06db6c5d2744: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:15.707124 containerd[1521]: time="2025-05-14T00:03:15.705315781Z" level=info msg="CreateContainer within sandbox \"f9190c632c78a8f886a54877fed616ebf07a7831dc5902a3afee0dbd50668de3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"177ec8e23ab7efa98233e8c8e59012ea60e5ef058e2793da53ef06db6c5d2744\"" May 14 00:03:15.707604 containerd[1521]: time="2025-05-14T00:03:15.707561259Z" level=info msg="StartContainer for \"177ec8e23ab7efa98233e8c8e59012ea60e5ef058e2793da53ef06db6c5d2744\"" May 14 00:03:15.708992 containerd[1521]: time="2025-05-14T00:03:15.708958409Z" level=info msg="connecting to shim 177ec8e23ab7efa98233e8c8e59012ea60e5ef058e2793da53ef06db6c5d2744" address="unix:///run/containerd/s/a334bfc2cb55bb406e2413e901c995d9c322d49a80ae31bca23dca58acb5a18b" protocol=ttrpc version=3 May 14 00:03:15.738534 systemd[1]: Started cri-containerd-177ec8e23ab7efa98233e8c8e59012ea60e5ef058e2793da53ef06db6c5d2744.scope - libcontainer container 177ec8e23ab7efa98233e8c8e59012ea60e5ef058e2793da53ef06db6c5d2744. 
May 14 00:03:15.836027 containerd[1521]: time="2025-05-14T00:03:15.835672677Z" level=info msg="StartContainer for \"177ec8e23ab7efa98233e8c8e59012ea60e5ef058e2793da53ef06db6c5d2744\" returns successfully" May 14 00:03:16.401557 kubelet[2670]: E0514 00:03:16.401528 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:19.730317 kubelet[2670]: E0514 00:03:19.730158 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:19.752396 kubelet[2670]: I0514 00:03:19.752287 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dg6b6" podStartSLOduration=6.752263704 podStartE2EDuration="6.752263704s" podCreationTimestamp="2025-05-14 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:16.4112976 +0000 UTC m=+7.116726379" watchObservedRunningTime="2025-05-14 00:03:19.752263704 +0000 UTC m=+10.457692473" May 14 00:03:20.012248 kubelet[2670]: E0514 00:03:20.011942 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:20.429307 kubelet[2670]: E0514 00:03:20.429039 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:21.430551 kubelet[2670]: E0514 00:03:21.430503 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:25.190231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075441214.mount: Deactivated successfully. 
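A note on the m=+N suffixes in these kubelet timestamps: they are Go's monotonic clock readings, i.e. seconds since the process started. Subtracting them from the wall-clock part yields the same base for every entry (00:03:16.4112976 − 7.116726379 ≈ 00:03:09.295, and 00:03:19.752263704 − 10.457692473 gives the same instant), so kubelet[2670] started around 00:03:09.295. For kube-proxy-dg6b6 no image was pulled (both pull timestamps are the zero value 0001-01-01), so podStartSLOduration equals podStartE2EDuration: 00:03:19.752263704 − 00:03:13 = 6.752263704s.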
May 14 00:03:35.169195 containerd[1521]: time="2025-05-14T00:03:35.166708185Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:03:35.228741 containerd[1521]: time="2025-05-14T00:03:35.228639443Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 00:03:35.321153 containerd[1521]: time="2025-05-14T00:03:35.321062574Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:03:35.322630 containerd[1521]: time="2025-05-14T00:03:35.322573631Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 19.990671036s" May 14 00:03:35.322630 containerd[1521]: time="2025-05-14T00:03:35.322622132Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 00:03:35.361087 containerd[1521]: time="2025-05-14T00:03:35.361009912Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:03:35.463101 containerd[1521]: time="2025-05-14T00:03:35.462933584Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:03:35.889537 containerd[1521]: time="2025-05-14T00:03:35.888261215Z" level=info msg="Container fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:38.580417 containerd[1521]: time="2025-05-14T00:03:38.580299507Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\"" May 14 00:03:38.584105 containerd[1521]: time="2025-05-14T00:03:38.584053534Z" level=info msg="StartContainer for \"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\"" May 14 00:03:38.585136 containerd[1521]: time="2025-05-14T00:03:38.585081254Z" level=info msg="connecting to shim fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7" address="unix:///run/containerd/s/8195f3b281d9e59a0b51b4a96d91c97f7c4f0c0cb21433fea65395ff45bf31fb" protocol=ttrpc version=3 May 14 00:03:38.644396 systemd[1]: Started cri-containerd-fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7.scope - libcontainer container fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7. May 14 00:03:38.746912 systemd[1]: cri-containerd-fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7.scope: Deactivated successfully. 
May 14 00:03:38.749494 containerd[1521]: time="2025-05-14T00:03:38.749444630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\" id:\"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\" pid:3088 exited_at:{seconds:1747181018 nanos:748887274}" May 14 00:03:39.207998 containerd[1521]: time="2025-05-14T00:03:39.207947584Z" level=info msg="received exit event container_id:\"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\" id:\"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\" pid:3088 exited_at:{seconds:1747181018 nanos:748887274}" May 14 00:03:39.208839 containerd[1521]: time="2025-05-14T00:03:39.208815002Z" level=info msg="StartContainer for \"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\" returns successfully" May 14 00:03:39.229761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7-rootfs.mount: Deactivated successfully. May 14 00:03:39.477791 kubelet[2670]: E0514 00:03:39.477650 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:40.624746 kubelet[2670]: E0514 00:03:40.624710 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:40.626529 containerd[1521]: time="2025-05-14T00:03:40.626493283Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:03:41.476155 containerd[1521]: time="2025-05-14T00:03:41.476100434Z" level=info msg="Container 440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:42.136734 containerd[1521]: time="2025-05-14T00:03:42.136672722Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\"" May 14 00:03:42.137195 containerd[1521]: time="2025-05-14T00:03:42.137175075Z" level=info msg="StartContainer for \"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\"" May 14 00:03:42.138268 containerd[1521]: time="2025-05-14T00:03:42.138241426Z" level=info msg="connecting to shim 440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390" address="unix:///run/containerd/s/8195f3b281d9e59a0b51b4a96d91c97f7c4f0c0cb21433fea65395ff45bf31fb" protocol=ttrpc version=3 May 14 00:03:42.164430 systemd[1]: Started cri-containerd-440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390.scope - libcontainer container 440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390. May 14 00:03:42.207070 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:03:42.207622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:03:42.207819 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 00:03:42.210086 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
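The fbdfe5c7… container above is mount-cgroup, the first of the five containers this log creates inside the cilium-pfrsr sandbox 6156f16a… (mount-cgroup, then apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent). Each short-lived init step leaves the same trace: CreateContainer, StartContainer, the cri-containerd scope deactivating as the process exits, a TaskExit plus received-exit event, then the rootfs mount being cleaned up. Because these containers finish within milliseconds, the exit events (00:03:38.74x) can land in the log before the "StartContainer … returns successfully" line (00:03:39.208), an ordering quirk worth keeping in mind when reading the repeats below.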
May 14 00:03:42.211452 containerd[1521]: time="2025-05-14T00:03:42.211420691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\" id:\"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\" pid:3133 exited_at:{seconds:1747181022 nanos:211124536}" May 14 00:03:42.213108 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:03:42.213672 systemd[1]: cri-containerd-440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390.scope: Deactivated successfully. May 14 00:03:42.319870 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:03:42.389552 containerd[1521]: time="2025-05-14T00:03:42.389425412Z" level=info msg="received exit event container_id:\"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\" id:\"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\" pid:3133 exited_at:{seconds:1747181022 nanos:211124536}" May 14 00:03:42.390831 containerd[1521]: time="2025-05-14T00:03:42.390752632Z" level=info msg="StartContainer for \"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\" returns successfully" May 14 00:03:42.477893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390-rootfs.mount: Deactivated successfully. May 14 00:03:42.629596 kubelet[2670]: E0514 00:03:42.629564 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:42.634082 containerd[1521]: time="2025-05-14T00:03:42.633788093Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:03:43.140366 containerd[1521]: time="2025-05-14T00:03:43.140315448Z" level=info msg="Container fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:43.562411 containerd[1521]: time="2025-05-14T00:03:43.562240865Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\"" May 14 00:03:43.563233 containerd[1521]: time="2025-05-14T00:03:43.563147005Z" level=info msg="StartContainer for \"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\"" May 14 00:03:43.565714 containerd[1521]: time="2025-05-14T00:03:43.565646747Z" level=info msg="connecting to shim fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd" address="unix:///run/containerd/s/8195f3b281d9e59a0b51b4a96d91c97f7c4f0c0cb21433fea65395ff45bf31fb" protocol=ttrpc version=3 May 14 00:03:43.591168 systemd[1]: Started cri-containerd-fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd.scope - libcontainer container fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd. May 14 00:03:43.634039 systemd[1]: cri-containerd-fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd.scope: Deactivated successfully. 
May 14 00:03:43.635064 containerd[1521]: time="2025-05-14T00:03:43.635029222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\" id:\"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\" pid:3181 exited_at:{seconds:1747181023 nanos:634736292}" May 14 00:03:44.010239 containerd[1521]: time="2025-05-14T00:03:44.010110308Z" level=info msg="received exit event container_id:\"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\" id:\"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\" pid:3181 exited_at:{seconds:1747181023 nanos:634736292}" May 14 00:03:44.012657 containerd[1521]: time="2025-05-14T00:03:44.012594790Z" level=info msg="StartContainer for \"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\" returns successfully" May 14 00:03:44.032791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd-rootfs.mount: Deactivated successfully. May 14 00:03:45.020497 kubelet[2670]: E0514 00:03:45.020466 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:45.023530 containerd[1521]: time="2025-05-14T00:03:45.022174687Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:03:46.438799 systemd[1]: Started sshd@8-10.0.0.109:22-10.0.0.1:56404.service - OpenSSH per-connection server daemon (10.0.0.1:56404). May 14 00:03:46.860261 sshd[3212]: Accepted publickey for core from 10.0.0.1 port 56404 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:03:46.862433 sshd-session[3212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:46.867565 systemd-logind[1502]: New session 8 of user core. May 14 00:03:46.877407 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 00:03:47.311988 sshd[3214]: Connection closed by 10.0.0.1 port 56404 May 14 00:03:47.312367 sshd-session[3212]: pam_unix(sshd:session): session closed for user core May 14 00:03:47.315508 systemd[1]: sshd@8-10.0.0.109:22-10.0.0.1:56404.service: Deactivated successfully. May 14 00:03:47.317764 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:03:47.319559 systemd-logind[1502]: Session 8 logged out. Waiting for processes to exit. May 14 00:03:47.320574 systemd-logind[1502]: Removed session 8. May 14 00:03:47.384461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047927873.mount: Deactivated successfully. May 14 00:03:47.671415 containerd[1521]: time="2025-05-14T00:03:47.671346708Z" level=info msg="Container 095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:47.675353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889078060.mount: Deactivated successfully. 
May 14 00:03:48.477293 containerd[1521]: time="2025-05-14T00:03:48.477192427Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\"" May 14 00:03:48.477827 containerd[1521]: time="2025-05-14T00:03:48.477791520Z" level=info msg="StartContainer for \"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\"" May 14 00:03:48.478919 containerd[1521]: time="2025-05-14T00:03:48.478883520Z" level=info msg="connecting to shim 095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57" address="unix:///run/containerd/s/8195f3b281d9e59a0b51b4a96d91c97f7c4f0c0cb21433fea65395ff45bf31fb" protocol=ttrpc version=3 May 14 00:03:48.498434 systemd[1]: Started cri-containerd-095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57.scope - libcontainer container 095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57. May 14 00:03:48.559346 systemd[1]: cri-containerd-095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57.scope: Deactivated successfully. May 14 00:03:48.560312 containerd[1521]: time="2025-05-14T00:03:48.560181699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\" id:\"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\" pid:3243 exited_at:{seconds:1747181028 nanos:559786473}" May 14 00:03:48.758243 containerd[1521]: time="2025-05-14T00:03:48.758009351Z" level=info msg="received exit event container_id:\"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\" id:\"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\" pid:3243 exited_at:{seconds:1747181028 nanos:559786473}" May 14 00:03:48.760267 containerd[1521]: time="2025-05-14T00:03:48.760194493Z" level=info msg="StartContainer for \"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\" returns successfully" May 14 00:03:48.783332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57-rootfs.mount: Deactivated successfully. 
May 14 00:03:49.353982 kubelet[2670]: E0514 00:03:49.353930 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:49.358865 containerd[1521]: time="2025-05-14T00:03:49.358820613Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:03:49.901534 containerd[1521]: time="2025-05-14T00:03:49.901480180Z" level=info msg="Container 63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:50.953987 containerd[1521]: time="2025-05-14T00:03:50.953934762Z" level=info msg="CreateContainer within sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\"" May 14 00:03:50.968374 containerd[1521]: time="2025-05-14T00:03:50.968299967Z" level=info msg="StartContainer for \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\"" May 14 00:03:50.969682 containerd[1521]: time="2025-05-14T00:03:50.969640525Z" level=info msg="connecting to shim 63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523" address="unix:///run/containerd/s/8195f3b281d9e59a0b51b4a96d91c97f7c4f0c0cb21433fea65395ff45bf31fb" protocol=ttrpc version=3 May 14 00:03:50.995368 systemd[1]: Started cri-containerd-63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523.scope - libcontainer container 63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523. May 14 00:03:51.156726 containerd[1521]: time="2025-05-14T00:03:51.156658857Z" level=info msg="StartContainer for \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" returns successfully" May 14 00:03:51.230155 containerd[1521]: time="2025-05-14T00:03:51.229883897Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" id:\"3c59dfa3f6bf4e1907667005884c7c621b5e20117987a51ca20a337d46602c55\" pid:3332 exited_at:{seconds:1747181031 nanos:229160156}" May 14 00:03:51.269757 kubelet[2670]: I0514 00:03:51.269719 2670 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 00:03:51.275197 kubelet[2670]: E0514 00:03:51.275160 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:51.544586 kubelet[2670]: I0514 00:03:51.544403 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pfrsr" podStartSLOduration=18.515836745 podStartE2EDuration="38.544385346s" podCreationTimestamp="2025-05-14 00:03:13 +0000 UTC" firstStartedPulling="2025-05-14 00:03:15.331309339 +0000 UTC m=+6.036738098" lastFinishedPulling="2025-05-14 00:03:35.35985794 +0000 UTC m=+26.065286699" observedRunningTime="2025-05-14 00:03:51.54405673 +0000 UTC m=+42.249485509" watchObservedRunningTime="2025-05-14 00:03:51.544385346 +0000 UTC m=+42.249814115" May 14 00:03:51.582705 systemd[1]: Created slice kubepods-burstable-pod37fd8940_e2e6_478d_979d_b62f828025eb.slice - libcontainer container kubepods-burstable-pod37fd8940_e2e6_478d_979d_b62f828025eb.slice. 
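The cilium-pfrsr startup entry just above exposes the latency tracker's arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:03:51.544385346 − 00:03:13 = 38.544385346s), and podStartSLOduration additionally subtracts the image-pull window (38.544385346s − 20.028548601s = 18.515836745s). The same numbers fall out of the entry's timestamps verbatim:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied verbatim from the cilium-pfrsr startup entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-05-14 00:03:13 +0000 UTC")             // podCreationTimestamp
        pullStart := parse("2025-05-14 00:03:15.331309339 +0000 UTC") // firstStartedPulling
        pullEnd := parse("2025-05-14 00:03:35.35985794 +0000 UTC")    // lastFinishedPulling
        watched := parse("2025-05-14 00:03:51.544385346 +0000 UTC")   // watchObservedRunningTime

        e2e := watched.Sub(created)         // 38.544385346s, matches podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // minus 20.028548601s of pulling: 18.515836745s
        fmt.Println("e2e:", e2e, "slo:", slo)
    }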
May 14 00:03:51.589398 systemd[1]: Created slice kubepods-burstable-podde023973_5652_4639_8850_2888b8bfc611.slice - libcontainer container kubepods-burstable-podde023973_5652_4639_8850_2888b8bfc611.slice. May 14 00:03:51.627413 containerd[1521]: time="2025-05-14T00:03:51.627344701Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:03:51.677737 containerd[1521]: time="2025-05-14T00:03:51.677652114Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 00:03:51.707277 containerd[1521]: time="2025-05-14T00:03:51.707055327Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:03:51.708281 containerd[1521]: time="2025-05-14T00:03:51.708191417Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 16.347127683s" May 14 00:03:51.708281 containerd[1521]: time="2025-05-14T00:03:51.708262675Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 00:03:51.711684 containerd[1521]: time="2025-05-14T00:03:51.711645854Z" level=info msg="CreateContainer within sandbox \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:03:51.738781 kubelet[2670]: I0514 00:03:51.738740 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw79z\" (UniqueName: \"kubernetes.io/projected/de023973-5652-4639-8850-2888b8bfc611-kube-api-access-tw79z\") pod \"coredns-6f6b679f8f-q9wrx\" (UID: \"de023973-5652-4639-8850-2888b8bfc611\") " pod="kube-system/coredns-6f6b679f8f-q9wrx" May 14 00:03:51.738781 kubelet[2670]: I0514 00:03:51.738786 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de023973-5652-4639-8850-2888b8bfc611-config-volume\") pod \"coredns-6f6b679f8f-q9wrx\" (UID: \"de023973-5652-4639-8850-2888b8bfc611\") " pod="kube-system/coredns-6f6b679f8f-q9wrx" May 14 00:03:51.738959 kubelet[2670]: I0514 00:03:51.738807 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cdkj\" (UniqueName: \"kubernetes.io/projected/37fd8940-e2e6-478d-979d-b62f828025eb-kube-api-access-7cdkj\") pod \"coredns-6f6b679f8f-gkfhv\" (UID: \"37fd8940-e2e6-478d-979d-b62f828025eb\") " pod="kube-system/coredns-6f6b679f8f-gkfhv" May 14 00:03:51.738959 kubelet[2670]: I0514 00:03:51.738827 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/37fd8940-e2e6-478d-979d-b62f828025eb-config-volume\") pod \"coredns-6f6b679f8f-gkfhv\" (UID: \"37fd8940-e2e6-478d-979d-b62f828025eb\") " pod="kube-system/coredns-6f6b679f8f-gkfhv" May 14 00:03:51.820399 containerd[1521]: time="2025-05-14T00:03:51.819592573Z" level=info msg="Container 39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:51.921747 containerd[1521]: time="2025-05-14T00:03:51.921695580Z" level=info msg="CreateContainer within sandbox \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\"" May 14 00:03:51.922250 containerd[1521]: time="2025-05-14T00:03:51.922230937Z" level=info msg="StartContainer for \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\"" May 14 00:03:51.923059 containerd[1521]: time="2025-05-14T00:03:51.923035213Z" level=info msg="connecting to shim 39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2" address="unix:///run/containerd/s/091efdee5e3973f0c196fe735248646237bb3ad5aa62b041b6b9cf1c8e63c3b9" protocol=ttrpc version=3 May 14 00:03:51.946479 systemd[1]: Started cri-containerd-39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2.scope - libcontainer container 39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2. May 14 00:03:52.105238 containerd[1521]: time="2025-05-14T00:03:52.105088628Z" level=info msg="StartContainer for \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" returns successfully" May 14 00:03:52.186098 kubelet[2670]: E0514 00:03:52.186055 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:52.189678 containerd[1521]: time="2025-05-14T00:03:52.189626831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gkfhv,Uid:37fd8940-e2e6-478d-979d-b62f828025eb,Namespace:kube-system,Attempt:0,}" May 14 00:03:52.194950 kubelet[2670]: E0514 00:03:52.194915 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:52.195471 containerd[1521]: time="2025-05-14T00:03:52.195422200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q9wrx,Uid:de023973-5652-4639-8850-2888b8bfc611,Namespace:kube-system,Attempt:0,}" May 14 00:03:52.283732 kubelet[2670]: E0514 00:03:52.281310 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:52.283732 kubelet[2670]: E0514 00:03:52.281423 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:52.331171 systemd[1]: Started sshd@9-10.0.0.109:22-10.0.0.1:38466.service - OpenSSH per-connection server daemon (10.0.0.1:38466). 
May 14 00:03:52.538084 sshd[3428]: Accepted publickey for core from 10.0.0.1 port 38466 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:03:52.540033 sshd-session[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:52.544886 systemd-logind[1502]: New session 9 of user core. May 14 00:03:52.551396 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 00:03:52.918537 sshd[3441]: Connection closed by 10.0.0.1 port 38466 May 14 00:03:52.919490 sshd-session[3428]: pam_unix(sshd:session): session closed for user core May 14 00:03:52.926536 systemd[1]: sshd@9-10.0.0.109:22-10.0.0.1:38466.service: Deactivated successfully. May 14 00:03:52.930100 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:03:52.932910 systemd-logind[1502]: Session 9 logged out. Waiting for processes to exit. May 14 00:03:52.936299 systemd-logind[1502]: Removed session 9. May 14 00:03:53.282786 kubelet[2670]: E0514 00:03:53.282639 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:53.282786 kubelet[2670]: E0514 00:03:53.282718 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:55.557788 systemd-networkd[1444]: cilium_host: Link UP May 14 00:03:55.557957 systemd-networkd[1444]: cilium_net: Link UP May 14 00:03:55.558141 systemd-networkd[1444]: cilium_net: Gained carrier May 14 00:03:55.558342 systemd-networkd[1444]: cilium_host: Gained carrier May 14 00:03:55.687580 systemd-networkd[1444]: cilium_vxlan: Link UP May 14 00:03:55.687592 systemd-networkd[1444]: cilium_vxlan: Gained carrier May 14 00:03:55.913244 kernel: NET: Registered PF_ALG protocol family May 14 00:03:56.305473 systemd-networkd[1444]: cilium_host: Gained IPv6LL May 14 00:03:56.369337 systemd-networkd[1444]: cilium_net: Gained IPv6LL May 14 00:03:56.617966 systemd-networkd[1444]: lxc_health: Link UP May 14 00:03:56.618757 systemd-networkd[1444]: lxc_health: Gained carrier May 14 00:03:56.986249 kernel: eth0: renamed from tmp889de May 14 00:03:56.993353 systemd-networkd[1444]: lxceaa56e77d68c: Link UP May 14 00:03:56.994498 systemd-networkd[1444]: lxceaa56e77d68c: Gained carrier May 14 00:03:57.137368 systemd-networkd[1444]: cilium_vxlan: Gained IPv6LL May 14 00:03:57.170382 systemd-networkd[1444]: lxc7ed06209242d: Link UP May 14 00:03:57.183267 kernel: eth0: renamed from tmp207f6 May 14 00:03:57.191586 systemd-networkd[1444]: lxc7ed06209242d: Gained carrier May 14 00:03:57.649375 systemd-networkd[1444]: lxc_health: Gained IPv6LL May 14 00:03:57.933413 systemd[1]: Started sshd@10-10.0.0.109:22-10.0.0.1:38476.service - OpenSSH per-connection server daemon (10.0.0.1:38476). May 14 00:03:58.003706 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 38476 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:03:58.006301 sshd-session[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:58.015950 systemd-logind[1502]: New session 10 of user core. May 14 00:03:58.025532 systemd[1]: Started session-10.scope - Session 10 of User core. 
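The link flurry at 00:03:55–57 is Cilium assembling its datapath: the cilium_host/cilium_net veth pair and the cilium_vxlan overlay device on the host side, lxc_health for endpoint health probing, and one lxc… device per pod endpoint. The kernel's "eth0: renamed from tmp889de" and "tmp207f6" lines are the container-side veth halves being moved into the new pod network namespaces and renamed; their temporary names carry the leading bytes of the CoreDNS sandbox ids (889de133…, 207f6f5b…) that appear when those sandboxes are created below.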
May 14 00:03:58.167596 sshd[3844]: Connection closed by 10.0.0.1 port 38476 May 14 00:03:58.171303 sshd-session[3842]: pam_unix(sshd:session): session closed for user core May 14 00:03:58.181058 systemd[1]: sshd@10-10.0.0.109:22-10.0.0.1:38476.service: Deactivated successfully. May 14 00:03:58.183724 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:03:58.184968 systemd-logind[1502]: Session 10 logged out. Waiting for processes to exit. May 14 00:03:58.186053 systemd-logind[1502]: Removed session 10. May 14 00:03:58.239279 kubelet[2670]: E0514 00:03:58.239113 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:58.286793 kubelet[2670]: I0514 00:03:58.286641 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-j8bff" podStartSLOduration=9.128843912 podStartE2EDuration="45.286615702s" podCreationTimestamp="2025-05-14 00:03:13 +0000 UTC" firstStartedPulling="2025-05-14 00:03:15.552145258 +0000 UTC m=+6.257574017" lastFinishedPulling="2025-05-14 00:03:51.709917048 +0000 UTC m=+42.415345807" observedRunningTime="2025-05-14 00:03:52.682372847 +0000 UTC m=+43.387801606" watchObservedRunningTime="2025-05-14 00:03:58.286615702 +0000 UTC m=+48.992044461" May 14 00:03:58.293358 kubelet[2670]: E0514 00:03:58.293324 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:58.993403 systemd-networkd[1444]: lxceaa56e77d68c: Gained IPv6LL May 14 00:03:59.185410 systemd-networkd[1444]: lxc7ed06209242d: Gained IPv6LL May 14 00:03:59.295574 kubelet[2670]: E0514 00:03:59.295245 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:02.130702 containerd[1521]: time="2025-05-14T00:04:02.130494197Z" level=info msg="connecting to shim 889de133ad10ecb9e2176afa8e8b150639ab359cf28d02a24e6f1bc89160d988" address="unix:///run/containerd/s/9b124fb2cad01b20b77ece9b7381ef44c959cdc3b9be79859781b97c12bcf4fa" namespace=k8s.io protocol=ttrpc version=3 May 14 00:04:02.131831 containerd[1521]: time="2025-05-14T00:04:02.131666880Z" level=info msg="connecting to shim 207f6f5ba251204512fb2db378d120b14ae75c80ceb072724f38d9ae194bf238" address="unix:///run/containerd/s/43b78a3ec510072d967181057c95fc4f259aa9b55881c21389918b2619af4f99" namespace=k8s.io protocol=ttrpc version=3 May 14 00:04:02.169843 systemd[1]: Started cri-containerd-207f6f5ba251204512fb2db378d120b14ae75c80ceb072724f38d9ae194bf238.scope - libcontainer container 207f6f5ba251204512fb2db378d120b14ae75c80ceb072724f38d9ae194bf238. May 14 00:04:02.172308 systemd[1]: Started cri-containerd-889de133ad10ecb9e2176afa8e8b150639ab359cf28d02a24e6f1bc89160d988.scope - libcontainer container 889de133ad10ecb9e2176afa8e8b150639ab359cf28d02a24e6f1bc89160d988. 
May 14 00:04:02.190023 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:04:02.194276 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:04:02.568576 containerd[1521]: time="2025-05-14T00:04:02.566015126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q9wrx,Uid:de023973-5652-4639-8850-2888b8bfc611,Namespace:kube-system,Attempt:0,} returns sandbox id \"207f6f5ba251204512fb2db378d120b14ae75c80ceb072724f38d9ae194bf238\"" May 14 00:04:02.568765 kubelet[2670]: E0514 00:04:02.567103 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:02.571551 containerd[1521]: time="2025-05-14T00:04:02.570860622Z" level=info msg="CreateContainer within sandbox \"207f6f5ba251204512fb2db378d120b14ae75c80ceb072724f38d9ae194bf238\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:04:02.658898 containerd[1521]: time="2025-05-14T00:04:02.658252854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gkfhv,Uid:37fd8940-e2e6-478d-979d-b62f828025eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"889de133ad10ecb9e2176afa8e8b150639ab359cf28d02a24e6f1bc89160d988\"" May 14 00:04:02.660163 kubelet[2670]: E0514 00:04:02.660126 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:02.662800 containerd[1521]: time="2025-05-14T00:04:02.662741214Z" level=info msg="CreateContainer within sandbox \"889de133ad10ecb9e2176afa8e8b150639ab359cf28d02a24e6f1bc89160d988\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:04:03.199419 systemd[1]: Started sshd@11-10.0.0.109:22-10.0.0.1:57494.service - OpenSSH per-connection server daemon (10.0.0.1:57494). May 14 00:04:03.207647 containerd[1521]: time="2025-05-14T00:04:03.207578386Z" level=info msg="Container a68ee7a792c4f61064006dbf59acdf4975869360090ffabb49d8ddc9485c0547: CDI devices from CRI Config.CDIDevices: []" May 14 00:04:03.209631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460471052.mount: Deactivated successfully. May 14 00:04:03.262094 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 57494 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:03.264644 sshd-session[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:03.270821 systemd-logind[1502]: New session 11 of user core. May 14 00:04:03.276459 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 00:04:03.541126 sshd[3963]: Connection closed by 10.0.0.1 port 57494 May 14 00:04:03.541785 sshd-session[3961]: pam_unix(sshd:session): session closed for user core May 14 00:04:03.546588 systemd[1]: sshd@11-10.0.0.109:22-10.0.0.1:57494.service: Deactivated successfully. May 14 00:04:03.548959 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:04:03.549700 systemd-logind[1502]: Session 11 logged out. Waiting for processes to exit. May 14 00:04:03.550765 systemd-logind[1502]: Removed session 11. 
May 14 00:04:03.733682 containerd[1521]: time="2025-05-14T00:04:03.733632726Z" level=info msg="CreateContainer within sandbox \"207f6f5ba251204512fb2db378d120b14ae75c80ceb072724f38d9ae194bf238\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a68ee7a792c4f61064006dbf59acdf4975869360090ffabb49d8ddc9485c0547\"" May 14 00:04:03.816475 containerd[1521]: time="2025-05-14T00:04:03.734070757Z" level=info msg="StartContainer for \"a68ee7a792c4f61064006dbf59acdf4975869360090ffabb49d8ddc9485c0547\"" May 14 00:04:03.816475 containerd[1521]: time="2025-05-14T00:04:03.734913895Z" level=info msg="connecting to shim a68ee7a792c4f61064006dbf59acdf4975869360090ffabb49d8ddc9485c0547" address="unix:///run/containerd/s/43b78a3ec510072d967181057c95fc4f259aa9b55881c21389918b2619af4f99" protocol=ttrpc version=3 May 14 00:04:03.841500 systemd[1]: Started cri-containerd-a68ee7a792c4f61064006dbf59acdf4975869360090ffabb49d8ddc9485c0547.scope - libcontainer container a68ee7a792c4f61064006dbf59acdf4975869360090ffabb49d8ddc9485c0547. May 14 00:04:04.337485 containerd[1521]: time="2025-05-14T00:04:04.336864663Z" level=info msg="Container 6c7a3309aab63772ea10a9e4378d20cc5e7854371b8ed93333d972e098254f5f: CDI devices from CRI Config.CDIDevices: []" May 14 00:04:04.483952 containerd[1521]: time="2025-05-14T00:04:04.483904292Z" level=info msg="StartContainer for \"a68ee7a792c4f61064006dbf59acdf4975869360090ffabb49d8ddc9485c0547\" returns successfully" May 14 00:04:04.982739 containerd[1521]: time="2025-05-14T00:04:04.982677631Z" level=info msg="CreateContainer within sandbox \"889de133ad10ecb9e2176afa8e8b150639ab359cf28d02a24e6f1bc89160d988\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c7a3309aab63772ea10a9e4378d20cc5e7854371b8ed93333d972e098254f5f\"" May 14 00:04:04.983316 containerd[1521]: time="2025-05-14T00:04:04.983282310Z" level=info msg="StartContainer for \"6c7a3309aab63772ea10a9e4378d20cc5e7854371b8ed93333d972e098254f5f\"" May 14 00:04:04.985043 containerd[1521]: time="2025-05-14T00:04:04.985019202Z" level=info msg="connecting to shim 6c7a3309aab63772ea10a9e4378d20cc5e7854371b8ed93333d972e098254f5f" address="unix:///run/containerd/s/9b124fb2cad01b20b77ece9b7381ef44c959cdc3b9be79859781b97c12bcf4fa" protocol=ttrpc version=3 May 14 00:04:05.010640 systemd[1]: Started cri-containerd-6c7a3309aab63772ea10a9e4378d20cc5e7854371b8ed93333d972e098254f5f.scope - libcontainer container 6c7a3309aab63772ea10a9e4378d20cc5e7854371b8ed93333d972e098254f5f. 
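[Note] The CreateContainer / StartContainer / "connecting to shim" records running through this part of the log are kubelet driving containerd's CRI plugin. The same create-then-start lifecycle can be reproduced outside kubelet with containerd's public Go client; the following is a minimal sketch, assuming a containerd 1.x daemon at the default socket and the k8s.io namespace seen on the shim connections above. The container ID and image reference are illustrative, and this is not kubelet's actual code path:

package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same daemon these log lines come from.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Illustrative image; kubelet would have pulled this via CRI instead.
	image, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Rough equivalent of the CreateContainer step recorded above.
	container, err := client.NewContainer(ctx, "coredns-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("coredns-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Creating the task is what produces the "connecting to shim" messages.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Rough equivalent of the StartContainer step recorded above.
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("container exited with status %d", code)
}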
May 14 00:04:05.175102 containerd[1521]: time="2025-05-14T00:04:05.174200511Z" level=info msg="StartContainer for \"6c7a3309aab63772ea10a9e4378d20cc5e7854371b8ed93333d972e098254f5f\" returns successfully" May 14 00:04:05.489279 kubelet[2670]: E0514 00:04:05.489102 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:05.489894 kubelet[2670]: E0514 00:04:05.489518 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:06.234552 kubelet[2670]: I0514 00:04:06.234471 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-q9wrx" podStartSLOduration=52.23445171 podStartE2EDuration="52.23445171s" podCreationTimestamp="2025-05-14 00:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:04:05.993076935 +0000 UTC m=+56.698505694" watchObservedRunningTime="2025-05-14 00:04:06.23445171 +0000 UTC m=+56.939880469" May 14 00:04:06.234844 kubelet[2670]: I0514 00:04:06.234645 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gkfhv" podStartSLOduration=53.234638709 podStartE2EDuration="53.234638709s" podCreationTimestamp="2025-05-14 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:04:06.234237671 +0000 UTC m=+56.939666440" watchObservedRunningTime="2025-05-14 00:04:06.234638709 +0000 UTC m=+56.940067478" May 14 00:04:06.492114 kubelet[2670]: E0514 00:04:06.491956 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:06.492639 kubelet[2670]: E0514 00:04:06.492237 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:07.495274 kubelet[2670]: E0514 00:04:07.495196 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:07.495889 kubelet[2670]: E0514 00:04:07.495837 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:08.497568 kubelet[2670]: E0514 00:04:08.497497 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:08.557304 systemd[1]: Started sshd@12-10.0.0.109:22-10.0.0.1:48024.service - OpenSSH per-connection server daemon (10.0.0.1:48024). May 14 00:04:08.655185 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 48024 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:08.657595 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:08.665413 systemd-logind[1502]: New session 12 of user core. 
May 14 00:04:08.680571 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 00:04:08.897574 sshd[4057]: Connection closed by 10.0.0.1 port 48024 May 14 00:04:08.898391 sshd-session[4055]: pam_unix(sshd:session): session closed for user core May 14 00:04:08.902186 systemd[1]: sshd@12-10.0.0.109:22-10.0.0.1:48024.service: Deactivated successfully. May 14 00:04:08.905124 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:04:08.910378 systemd-logind[1502]: Session 12 logged out. Waiting for processes to exit. May 14 00:04:08.912169 systemd-logind[1502]: Removed session 12. May 14 00:04:13.910014 systemd[1]: Started sshd@13-10.0.0.109:22-10.0.0.1:48034.service - OpenSSH per-connection server daemon (10.0.0.1:48034). May 14 00:04:13.965061 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 48034 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:13.966723 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:13.971463 systemd-logind[1502]: New session 13 of user core. May 14 00:04:13.986364 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 00:04:14.107958 sshd[4079]: Connection closed by 10.0.0.1 port 48034 May 14 00:04:14.108342 sshd-session[4077]: pam_unix(sshd:session): session closed for user core May 14 00:04:14.113624 systemd[1]: sshd@13-10.0.0.109:22-10.0.0.1:48034.service: Deactivated successfully. May 14 00:04:14.116165 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:04:14.116999 systemd-logind[1502]: Session 13 logged out. Waiting for processes to exit. May 14 00:04:14.118140 systemd-logind[1502]: Removed session 13. May 14 00:04:19.121197 systemd[1]: Started sshd@14-10.0.0.109:22-10.0.0.1:36946.service - OpenSSH per-connection server daemon (10.0.0.1:36946). May 14 00:04:19.171396 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 36946 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:19.173577 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:19.178657 systemd-logind[1502]: New session 14 of user core. May 14 00:04:19.187437 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 00:04:19.339940 sshd[4098]: Connection closed by 10.0.0.1 port 36946 May 14 00:04:19.340305 sshd-session[4096]: pam_unix(sshd:session): session closed for user core May 14 00:04:19.344643 systemd[1]: sshd@14-10.0.0.109:22-10.0.0.1:36946.service: Deactivated successfully. May 14 00:04:19.346794 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:04:19.347667 systemd-logind[1502]: Session 14 logged out. Waiting for processes to exit. May 14 00:04:19.348891 systemd-logind[1502]: Removed session 14. May 14 00:04:24.367335 systemd[1]: Started sshd@15-10.0.0.109:22-10.0.0.1:36954.service - OpenSSH per-connection server daemon (10.0.0.1:36954). May 14 00:04:24.428861 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 36954 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:24.431330 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:24.440354 systemd-logind[1502]: New session 15 of user core. May 14 00:04:24.447532 systemd[1]: Started session-15.scope - Session 15 of User core. 
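[Note] Every sshd@N-10.0.0.109:22-10.0.0.1:PORT.service unit started and deactivated throughout this log is a per-connection service instance: when a socket unit runs with Accept=yes, systemd spawns one template instance per accepted TCP connection and encodes the connection counter plus the local and remote endpoints in the instance name. An illustrative socket unit that yields this behavior (not necessarily byte-for-byte the unit shipped on this image):

[Unit]
Description=OpenSSH per-connection server socket

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

This is why each SSH login here appears as its own short-lived sshd@….service plus a session-N.scope, rather than as a child of one long-running daemon.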
May 14 00:04:24.573880 sshd[4115]: Connection closed by 10.0.0.1 port 36954 May 14 00:04:24.574302 sshd-session[4113]: pam_unix(sshd:session): session closed for user core May 14 00:04:24.588775 systemd[1]: sshd@15-10.0.0.109:22-10.0.0.1:36954.service: Deactivated successfully. May 14 00:04:24.591539 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:04:24.595197 systemd-logind[1502]: Session 15 logged out. Waiting for processes to exit. May 14 00:04:24.596901 systemd[1]: Started sshd@16-10.0.0.109:22-10.0.0.1:36964.service - OpenSSH per-connection server daemon (10.0.0.1:36964). May 14 00:04:24.598377 systemd-logind[1502]: Removed session 15. May 14 00:04:24.647932 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 36964 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:24.650069 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:24.655729 systemd-logind[1502]: New session 16 of user core. May 14 00:04:24.665574 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 00:04:24.872535 sshd[4131]: Connection closed by 10.0.0.1 port 36964 May 14 00:04:24.872955 sshd-session[4128]: pam_unix(sshd:session): session closed for user core May 14 00:04:24.892334 systemd[1]: sshd@16-10.0.0.109:22-10.0.0.1:36964.service: Deactivated successfully. May 14 00:04:24.894588 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:04:24.896728 systemd-logind[1502]: Session 16 logged out. Waiting for processes to exit. May 14 00:04:24.901686 systemd[1]: Started sshd@17-10.0.0.109:22-10.0.0.1:36980.service - OpenSSH per-connection server daemon (10.0.0.1:36980). May 14 00:04:24.904454 systemd-logind[1502]: Removed session 16. May 14 00:04:24.961785 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 36980 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:24.963291 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:24.968298 systemd-logind[1502]: New session 17 of user core. May 14 00:04:24.979413 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 00:04:25.185809 sshd[4145]: Connection closed by 10.0.0.1 port 36980 May 14 00:04:25.186064 sshd-session[4142]: pam_unix(sshd:session): session closed for user core May 14 00:04:25.191542 systemd[1]: sshd@17-10.0.0.109:22-10.0.0.1:36980.service: Deactivated successfully. May 14 00:04:25.194019 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:04:25.194856 systemd-logind[1502]: Session 17 logged out. Waiting for processes to exit. May 14 00:04:25.195915 systemd-logind[1502]: Removed session 17. May 14 00:04:30.204004 systemd[1]: Started sshd@18-10.0.0.109:22-10.0.0.1:59166.service - OpenSSH per-connection server daemon (10.0.0.1:59166). May 14 00:04:30.260995 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 59166 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:30.262904 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:30.269437 systemd-logind[1502]: New session 18 of user core. May 14 00:04:30.276564 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 14 00:04:30.397226 sshd[4162]: Connection closed by 10.0.0.1 port 59166 May 14 00:04:30.397603 sshd-session[4160]: pam_unix(sshd:session): session closed for user core May 14 00:04:30.403743 systemd[1]: sshd@18-10.0.0.109:22-10.0.0.1:59166.service: Deactivated successfully. May 14 00:04:30.406280 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:04:30.407393 systemd-logind[1502]: Session 18 logged out. Waiting for processes to exit. May 14 00:04:30.409699 systemd-logind[1502]: Removed session 18. May 14 00:04:32.374697 kubelet[2670]: E0514 00:04:32.374644 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:34.374823 kubelet[2670]: E0514 00:04:34.374704 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:35.411034 systemd[1]: Started sshd@19-10.0.0.109:22-10.0.0.1:59174.service - OpenSSH per-connection server daemon (10.0.0.1:59174). May 14 00:04:35.469620 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:35.471913 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:35.477686 systemd-logind[1502]: New session 19 of user core. May 14 00:04:35.487441 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 00:04:35.603675 sshd[4178]: Connection closed by 10.0.0.1 port 59174 May 14 00:04:35.604086 sshd-session[4176]: pam_unix(sshd:session): session closed for user core May 14 00:04:35.608863 systemd[1]: sshd@19-10.0.0.109:22-10.0.0.1:59174.service: Deactivated successfully. May 14 00:04:35.611374 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:04:35.612254 systemd-logind[1502]: Session 19 logged out. Waiting for processes to exit. May 14 00:04:35.613783 systemd-logind[1502]: Removed session 19. May 14 00:04:39.374252 kubelet[2670]: E0514 00:04:39.374151 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:40.619553 systemd[1]: Started sshd@20-10.0.0.109:22-10.0.0.1:49120.service - OpenSSH per-connection server daemon (10.0.0.1:49120). May 14 00:04:40.680261 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 49120 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:40.682632 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:40.689506 systemd-logind[1502]: New session 20 of user core. May 14 00:04:40.700706 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 00:04:40.833021 sshd[4193]: Connection closed by 10.0.0.1 port 49120 May 14 00:04:40.833586 sshd-session[4191]: pam_unix(sshd:session): session closed for user core May 14 00:04:40.845907 systemd[1]: sshd@20-10.0.0.109:22-10.0.0.1:49120.service: Deactivated successfully. May 14 00:04:40.849331 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:04:40.851493 systemd-logind[1502]: Session 20 logged out. Waiting for processes to exit. May 14 00:04:40.854189 systemd[1]: Started sshd@21-10.0.0.109:22-10.0.0.1:49136.service - OpenSSH per-connection server daemon (10.0.0.1:49136). 
May 14 00:04:40.859660 systemd-logind[1502]: Removed session 20. May 14 00:04:40.908628 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 49136 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:40.910732 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:40.915552 systemd-logind[1502]: New session 21 of user core. May 14 00:04:40.925526 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 00:04:42.811056 sshd[4208]: Connection closed by 10.0.0.1 port 49136 May 14 00:04:42.812603 sshd-session[4205]: pam_unix(sshd:session): session closed for user core May 14 00:04:42.826270 systemd[1]: sshd@21-10.0.0.109:22-10.0.0.1:49136.service: Deactivated successfully. May 14 00:04:42.828438 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:04:42.829364 systemd-logind[1502]: Session 21 logged out. Waiting for processes to exit. May 14 00:04:42.832097 systemd[1]: Started sshd@22-10.0.0.109:22-10.0.0.1:49146.service - OpenSSH per-connection server daemon (10.0.0.1:49146). May 14 00:04:42.832803 systemd-logind[1502]: Removed session 21. May 14 00:04:42.891035 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 49146 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:42.892963 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:42.897712 systemd-logind[1502]: New session 22 of user core. May 14 00:04:42.910583 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 00:04:43.375102 kubelet[2670]: E0514 00:04:43.375009 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:44.627090 sshd[4221]: Connection closed by 10.0.0.1 port 49146 May 14 00:04:44.632389 sshd-session[4218]: pam_unix(sshd:session): session closed for user core May 14 00:04:44.647556 systemd[1]: Started sshd@23-10.0.0.109:22-10.0.0.1:49156.service - OpenSSH per-connection server daemon (10.0.0.1:49156). May 14 00:04:44.648857 systemd[1]: sshd@22-10.0.0.109:22-10.0.0.1:49146.service: Deactivated successfully. May 14 00:04:44.655310 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:04:44.655650 systemd[1]: session-22.scope: Consumed 696ms CPU time, 65.4M memory peak. May 14 00:04:44.657730 systemd-logind[1502]: Session 22 logged out. Waiting for processes to exit. May 14 00:04:44.659461 systemd-logind[1502]: Removed session 22. May 14 00:04:44.705152 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 49156 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:44.707006 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:44.714475 systemd-logind[1502]: New session 23 of user core. May 14 00:04:44.725619 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 00:04:45.173586 sshd[4242]: Connection closed by 10.0.0.1 port 49156 May 14 00:04:45.173936 sshd-session[4237]: pam_unix(sshd:session): session closed for user core May 14 00:04:45.188801 systemd[1]: sshd@23-10.0.0.109:22-10.0.0.1:49156.service: Deactivated successfully. May 14 00:04:45.191358 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:04:45.193805 systemd-logind[1502]: Session 23 logged out. Waiting for processes to exit. 
May 14 00:04:45.195711 systemd[1]: Started sshd@24-10.0.0.109:22-10.0.0.1:49172.service - OpenSSH per-connection server daemon (10.0.0.1:49172). May 14 00:04:45.198010 systemd-logind[1502]: Removed session 23. May 14 00:04:45.266739 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 49172 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:45.268274 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:45.278586 systemd-logind[1502]: New session 24 of user core. May 14 00:04:45.292707 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 00:04:45.481845 sshd[4256]: Connection closed by 10.0.0.1 port 49172 May 14 00:04:45.482221 sshd-session[4253]: pam_unix(sshd:session): session closed for user core May 14 00:04:45.487812 systemd[1]: sshd@24-10.0.0.109:22-10.0.0.1:49172.service: Deactivated successfully. May 14 00:04:45.493837 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:04:45.497723 systemd-logind[1502]: Session 24 logged out. Waiting for processes to exit. May 14 00:04:45.500078 systemd-logind[1502]: Removed session 24. May 14 00:04:50.514044 systemd[1]: Started sshd@25-10.0.0.109:22-10.0.0.1:47106.service - OpenSSH per-connection server daemon (10.0.0.1:47106). May 14 00:04:50.576245 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 47106 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:50.579054 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:50.590648 systemd-logind[1502]: New session 25 of user core. May 14 00:04:50.599154 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 00:04:50.790459 sshd[4274]: Connection closed by 10.0.0.1 port 47106 May 14 00:04:50.790852 sshd-session[4272]: pam_unix(sshd:session): session closed for user core May 14 00:04:50.795676 systemd[1]: sshd@25-10.0.0.109:22-10.0.0.1:47106.service: Deactivated successfully. May 14 00:04:50.798559 systemd[1]: session-25.scope: Deactivated successfully. May 14 00:04:50.799532 systemd-logind[1502]: Session 25 logged out. Waiting for processes to exit. May 14 00:04:50.800964 systemd-logind[1502]: Removed session 25. May 14 00:04:55.808156 systemd[1]: Started sshd@26-10.0.0.109:22-10.0.0.1:47122.service - OpenSSH per-connection server daemon (10.0.0.1:47122). May 14 00:04:55.866537 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 47122 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:04:55.869315 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:55.876112 systemd-logind[1502]: New session 26 of user core. May 14 00:04:55.887544 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 00:04:56.019132 sshd[4289]: Connection closed by 10.0.0.1 port 47122 May 14 00:04:56.019415 sshd-session[4287]: pam_unix(sshd:session): session closed for user core May 14 00:04:56.025099 systemd[1]: sshd@26-10.0.0.109:22-10.0.0.1:47122.service: Deactivated successfully. May 14 00:04:56.027471 systemd[1]: session-26.scope: Deactivated successfully. May 14 00:04:56.028244 systemd-logind[1502]: Session 26 logged out. Waiting for processes to exit. May 14 00:04:56.029255 systemd-logind[1502]: Removed session 26. May 14 00:05:01.033350 systemd[1]: Started sshd@27-10.0.0.109:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). 
May 14 00:05:01.086356 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:01.097943 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:01.103622 systemd-logind[1502]: New session 27 of user core. May 14 00:05:01.115495 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 00:05:01.343603 sshd[4304]: Connection closed by 10.0.0.1 port 53926 May 14 00:05:01.343904 sshd-session[4302]: pam_unix(sshd:session): session closed for user core May 14 00:05:01.348432 systemd[1]: sshd@27-10.0.0.109:22-10.0.0.1:53926.service: Deactivated successfully. May 14 00:05:01.351084 systemd[1]: session-27.scope: Deactivated successfully. May 14 00:05:01.351920 systemd-logind[1502]: Session 27 logged out. Waiting for processes to exit. May 14 00:05:01.352879 systemd-logind[1502]: Removed session 27. May 14 00:05:06.361893 systemd[1]: Started sshd@28-10.0.0.109:22-10.0.0.1:53936.service - OpenSSH per-connection server daemon (10.0.0.1:53936). May 14 00:05:06.416587 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 53936 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:06.476039 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:06.483302 systemd-logind[1502]: New session 28 of user core. May 14 00:05:06.487433 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 00:05:06.602707 sshd[4325]: Connection closed by 10.0.0.1 port 53936 May 14 00:05:06.603037 sshd-session[4320]: pam_unix(sshd:session): session closed for user core May 14 00:05:06.606970 systemd[1]: sshd@28-10.0.0.109:22-10.0.0.1:53936.service: Deactivated successfully. May 14 00:05:06.609245 systemd[1]: session-28.scope: Deactivated successfully. May 14 00:05:06.609980 systemd-logind[1502]: Session 28 logged out. Waiting for processes to exit. May 14 00:05:06.611739 systemd-logind[1502]: Removed session 28. May 14 00:05:07.374443 kubelet[2670]: E0514 00:05:07.374396 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:11.626596 systemd[1]: Started sshd@29-10.0.0.109:22-10.0.0.1:51872.service - OpenSSH per-connection server daemon (10.0.0.1:51872). May 14 00:05:11.673839 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 51872 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:11.675831 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:11.681511 systemd-logind[1502]: New session 29 of user core. May 14 00:05:11.688329 systemd[1]: Started session-29.scope - Session 29 of User core. May 14 00:05:11.803768 sshd[4342]: Connection closed by 10.0.0.1 port 51872 May 14 00:05:11.804242 sshd-session[4340]: pam_unix(sshd:session): session closed for user core May 14 00:05:11.809098 systemd[1]: sshd@29-10.0.0.109:22-10.0.0.1:51872.service: Deactivated successfully. May 14 00:05:11.811864 systemd[1]: session-29.scope: Deactivated successfully. May 14 00:05:11.812882 systemd-logind[1502]: Session 29 logged out. Waiting for processes to exit. May 14 00:05:11.813968 systemd-logind[1502]: Removed session 29. 
May 14 00:05:13.375175 kubelet[2670]: E0514 00:05:13.375065 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:16.819826 systemd[1]: Started sshd@30-10.0.0.109:22-10.0.0.1:51876.service - OpenSSH per-connection server daemon (10.0.0.1:51876). May 14 00:05:16.871711 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 51876 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:16.957629 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:16.963580 systemd-logind[1502]: New session 30 of user core. May 14 00:05:16.978488 systemd[1]: Started session-30.scope - Session 30 of User core. May 14 00:05:17.149297 sshd[4360]: Connection closed by 10.0.0.1 port 51876 May 14 00:05:17.149605 sshd-session[4358]: pam_unix(sshd:session): session closed for user core May 14 00:05:17.154119 systemd[1]: sshd@30-10.0.0.109:22-10.0.0.1:51876.service: Deactivated successfully. May 14 00:05:17.156433 systemd[1]: session-30.scope: Deactivated successfully. May 14 00:05:17.157102 systemd-logind[1502]: Session 30 logged out. Waiting for processes to exit. May 14 00:05:17.157881 systemd-logind[1502]: Removed session 30. May 14 00:05:20.374619 kubelet[2670]: E0514 00:05:20.374563 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:22.168648 systemd[1]: Started sshd@31-10.0.0.109:22-10.0.0.1:47296.service - OpenSSH per-connection server daemon (10.0.0.1:47296). May 14 00:05:22.215485 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 47296 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:22.218925 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:22.224832 systemd-logind[1502]: New session 31 of user core. May 14 00:05:22.234532 systemd[1]: Started session-31.scope - Session 31 of User core. May 14 00:05:22.929336 sshd[4376]: Connection closed by 10.0.0.1 port 47296 May 14 00:05:22.929753 sshd-session[4374]: pam_unix(sshd:session): session closed for user core May 14 00:05:22.934478 systemd[1]: sshd@31-10.0.0.109:22-10.0.0.1:47296.service: Deactivated successfully. May 14 00:05:22.936621 systemd[1]: session-31.scope: Deactivated successfully. May 14 00:05:22.937387 systemd-logind[1502]: Session 31 logged out. Waiting for processes to exit. May 14 00:05:22.938435 systemd-logind[1502]: Removed session 31. May 14 00:05:24.374915 kubelet[2670]: E0514 00:05:24.374842 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:27.949338 systemd[1]: Started sshd@32-10.0.0.109:22-10.0.0.1:47308.service - OpenSSH per-connection server daemon (10.0.0.1:47308). May 14 00:05:28.011029 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 47308 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:28.012877 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:28.018527 systemd-logind[1502]: New session 32 of user core. May 14 00:05:28.031466 systemd[1]: Started session-32.scope - Session 32 of User core. 
May 14 00:05:28.143283 sshd[4393]: Connection closed by 10.0.0.1 port 47308 May 14 00:05:28.143670 sshd-session[4391]: pam_unix(sshd:session): session closed for user core May 14 00:05:28.156899 systemd[1]: sshd@32-10.0.0.109:22-10.0.0.1:47308.service: Deactivated successfully. May 14 00:05:28.159141 systemd[1]: session-32.scope: Deactivated successfully. May 14 00:05:28.161098 systemd-logind[1502]: Session 32 logged out. Waiting for processes to exit. May 14 00:05:28.162799 systemd[1]: Started sshd@33-10.0.0.109:22-10.0.0.1:54732.service - OpenSSH per-connection server daemon (10.0.0.1:54732). May 14 00:05:28.163976 systemd-logind[1502]: Removed session 32. May 14 00:05:28.219853 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 54732 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:28.222017 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:28.227379 systemd-logind[1502]: New session 33 of user core. May 14 00:05:28.236501 systemd[1]: Started session-33.scope - Session 33 of User core. May 14 00:05:30.082630 containerd[1521]: time="2025-05-14T00:05:30.082577782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" id:\"221417d227478b0e750627f156cda18a76e10ad545ab987892f2cef59ae0c854\" pid:4429 exited_at:{seconds:1747181130 nanos:81997938}" May 14 00:05:30.084354 containerd[1521]: time="2025-05-14T00:05:30.084278418Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:05:30.084971 containerd[1521]: time="2025-05-14T00:05:30.084931980Z" level=info msg="StopContainer for \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" with timeout 2 (s)" May 14 00:05:30.085282 containerd[1521]: time="2025-05-14T00:05:30.085254899Z" level=info msg="Stop container \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" with signal terminated" May 14 00:05:30.093196 systemd-networkd[1444]: lxc_health: Link DOWN May 14 00:05:30.093221 systemd-networkd[1444]: lxc_health: Lost carrier May 14 00:05:30.112926 systemd[1]: cri-containerd-63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523.scope: Deactivated successfully. May 14 00:05:30.113744 systemd[1]: cri-containerd-63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523.scope: Consumed 8.238s CPU time, 124.9M memory peak, 244K read from disk, 13.3M written to disk. May 14 00:05:30.115498 containerd[1521]: time="2025-05-14T00:05:30.115456377Z" level=info msg="received exit event container_id:\"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" id:\"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" pid:3288 exited_at:{seconds:1747181130 nanos:115219861}" May 14 00:05:30.115748 containerd[1521]: time="2025-05-14T00:05:30.115679748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" id:\"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" pid:3288 exited_at:{seconds:1747181130 nanos:115219861}" May 14 00:05:30.138081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523-rootfs.mount: Deactivated successfully. 
May 14 00:05:30.539929 containerd[1521]: time="2025-05-14T00:05:30.539725552Z" level=info msg="StopContainer for \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" with timeout 30 (s)" May 14 00:05:30.541008 containerd[1521]: time="2025-05-14T00:05:30.540944892Z" level=info msg="Stop container \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" with signal terminated" May 14 00:05:30.553993 systemd[1]: cri-containerd-39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2.scope: Deactivated successfully. May 14 00:05:30.555729 containerd[1521]: time="2025-05-14T00:05:30.555647520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" id:\"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" pid:3403 exited_at:{seconds:1747181130 nanos:554995140}" May 14 00:05:30.556185 containerd[1521]: time="2025-05-14T00:05:30.556132284Z" level=info msg="received exit event container_id:\"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" id:\"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" pid:3403 exited_at:{seconds:1747181130 nanos:554995140}" May 14 00:05:30.582281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2-rootfs.mount: Deactivated successfully. May 14 00:05:30.935002 containerd[1521]: time="2025-05-14T00:05:30.934934934Z" level=info msg="StopContainer for \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" returns successfully" May 14 00:05:30.935705 containerd[1521]: time="2025-05-14T00:05:30.935658037Z" level=info msg="StopPodSandbox for \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\"" May 14 00:05:31.016592 containerd[1521]: time="2025-05-14T00:05:31.016504823Z" level=info msg="Container to stop \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:31.025443 containerd[1521]: time="2025-05-14T00:05:31.024802056Z" level=info msg="StopContainer for \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" returns successfully" May 14 00:05:31.025443 containerd[1521]: time="2025-05-14T00:05:31.025078327Z" level=info msg="StopPodSandbox for \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\"" May 14 00:05:31.025443 containerd[1521]: time="2025-05-14T00:05:31.025149571Z" level=info msg="Container to stop \"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:31.025443 containerd[1521]: time="2025-05-14T00:05:31.025162125Z" level=info msg="Container to stop \"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:31.025443 containerd[1521]: time="2025-05-14T00:05:31.025171342Z" level=info msg="Container to stop \"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:31.025443 containerd[1521]: time="2025-05-14T00:05:31.025180099Z" level=info msg="Container to stop \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:31.025443 containerd[1521]: time="2025-05-14T00:05:31.025188003Z" 
level=info msg="Container to stop \"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:31.028896 systemd[1]: cri-containerd-afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910.scope: Deactivated successfully. May 14 00:05:31.031713 containerd[1521]: time="2025-05-14T00:05:31.031673430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" id:\"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" pid:2873 exit_status:137 exited_at:{seconds:1747181131 nanos:31417597}" May 14 00:05:31.041421 systemd[1]: cri-containerd-6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041.scope: Deactivated successfully. May 14 00:05:31.062913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041-rootfs.mount: Deactivated successfully. May 14 00:05:31.068736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910-rootfs.mount: Deactivated successfully. May 14 00:05:31.368573 containerd[1521]: time="2025-05-14T00:05:31.368533571Z" level=info msg="shim disconnected" id=6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041 namespace=k8s.io May 14 00:05:31.368573 containerd[1521]: time="2025-05-14T00:05:31.368573337Z" level=warning msg="cleaning up after shim disconnected" id=6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041 namespace=k8s.io May 14 00:05:31.369062 containerd[1521]: time="2025-05-14T00:05:31.368554551Z" level=info msg="shim disconnected" id=afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910 namespace=k8s.io May 14 00:05:31.369062 containerd[1521]: time="2025-05-14T00:05:31.368615145Z" level=warning msg="cleaning up after shim disconnected" id=afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910 namespace=k8s.io May 14 00:05:31.374680 containerd[1521]: time="2025-05-14T00:05:31.368625665Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:05:31.374752 containerd[1521]: time="2025-05-14T00:05:31.368583336Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:05:31.398283 containerd[1521]: time="2025-05-14T00:05:31.398188858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" id:\"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" pid:2784 exit_status:137 exited_at:{seconds:1747181131 nanos:41021313}" May 14 00:05:31.401136 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910-shm.mount: Deactivated successfully. May 14 00:05:31.401309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041-shm.mount: Deactivated successfully. 
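[Note] The exit_status:137 on the TaskExit records above is the usual 128+signal encoding for a task killed with SIGKILL (9), which is expected while both pod sandboxes are being torn down. containerd publishes these on its event bus under the /tasks/exit topic, and any client on the same socket can watch them. A minimal sketch of such a watcher with the Go client (the namespace and filter expression are illustrative):

package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Server-side filter: only task-exit envelopes reach this subscriber.
	envelopes, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-envelopes:
			log.Printf("ns=%s topic=%s type=%s", env.Namespace, env.Topic, env.Event.GetTypeUrl())
		case err := <-errs:
			log.Fatal(err)
		}
	}
}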
May 14 00:05:31.420555 containerd[1521]: time="2025-05-14T00:05:31.420471816Z" level=info msg="TearDown network for sandbox \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" successfully" May 14 00:05:31.420555 containerd[1521]: time="2025-05-14T00:05:31.420534474Z" level=info msg="StopPodSandbox for \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" returns successfully" May 14 00:05:31.427266 containerd[1521]: time="2025-05-14T00:05:31.426265950Z" level=info msg="received exit event sandbox_id:\"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" exit_status:137 exited_at:{seconds:1747181131 nanos:31417597}" May 14 00:05:31.427266 containerd[1521]: time="2025-05-14T00:05:31.426357983Z" level=info msg="received exit event sandbox_id:\"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" exit_status:137 exited_at:{seconds:1747181131 nanos:41021313}" May 14 00:05:31.443323 containerd[1521]: time="2025-05-14T00:05:31.443239819Z" level=info msg="TearDown network for sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" successfully" May 14 00:05:31.443323 containerd[1521]: time="2025-05-14T00:05:31.443296205Z" level=info msg="StopPodSandbox for \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" returns successfully" May 14 00:05:31.537102 kubelet[2670]: I0514 00:05:31.537023 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf5b6\" (UniqueName: \"kubernetes.io/projected/3180526a-450e-4fa4-aa2e-9d5d13c41558-kube-api-access-nf5b6\") pod \"3180526a-450e-4fa4-aa2e-9d5d13c41558\" (UID: \"3180526a-450e-4fa4-aa2e-9d5d13c41558\") " May 14 00:05:31.537102 kubelet[2670]: I0514 00:05:31.537111 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3180526a-450e-4fa4-aa2e-9d5d13c41558-cilium-config-path\") pod \"3180526a-450e-4fa4-aa2e-9d5d13c41558\" (UID: \"3180526a-450e-4fa4-aa2e-9d5d13c41558\") " May 14 00:05:31.540813 kubelet[2670]: I0514 00:05:31.540775 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3180526a-450e-4fa4-aa2e-9d5d13c41558-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3180526a-450e-4fa4-aa2e-9d5d13c41558" (UID: "3180526a-450e-4fa4-aa2e-9d5d13c41558"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:05:31.546343 kubelet[2670]: I0514 00:05:31.546276 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3180526a-450e-4fa4-aa2e-9d5d13c41558-kube-api-access-nf5b6" (OuterVolumeSpecName: "kube-api-access-nf5b6") pod "3180526a-450e-4fa4-aa2e-9d5d13c41558" (UID: "3180526a-450e-4fa4-aa2e-9d5d13c41558"). InnerVolumeSpecName "kube-api-access-nf5b6". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:05:31.547794 systemd[1]: var-lib-kubelet-pods-3180526a\x2d450e\x2d4fa4\x2daa2e\x2d9d5d13c41558-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnf5b6.mount: Deactivated successfully. 
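[Note] The \x2d and \x7e sequences in the .mount unit name just deactivated are systemd's unit-name escaping: '/' in a path maps to '-', so a literal '-' or '~' inside a path component has to be rendered as a hex escape. The name can be reproduced from the volume path with systemd-escape, for example:

systemd-escape --path --suffix=mount \
    "/var/lib/kubelet/pods/3180526a-450e-4fa4-aa2e-9d5d13c41558/volumes/kubernetes.io~projected/kube-api-access-nf5b6"

which prints the var-lib-kubelet-pods-3180526a\x2d…-kube\x2dapi\x2daccess\x2dnf5b6.mount name seen in the record above.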
May 14 00:05:31.637628 kubelet[2670]: I0514 00:05:31.637419 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-config-path\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637628 kubelet[2670]: I0514 00:05:31.637491 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-lib-modules\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637628 kubelet[2670]: I0514 00:05:31.637519 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq5qn\" (UniqueName: \"kubernetes.io/projected/4956121a-a340-4510-a7ef-1f247bd5ebf1-kube-api-access-fq5qn\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637628 kubelet[2670]: I0514 00:05:31.637545 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-cgroup\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637628 kubelet[2670]: I0514 00:05:31.637567 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cni-path\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637628 kubelet[2670]: I0514 00:05:31.637585 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-run\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637986 kubelet[2670]: I0514 00:05:31.637604 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-host-proc-sys-kernel\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637986 kubelet[2670]: I0514 00:05:31.637625 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-xtables-lock\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637986 kubelet[2670]: I0514 00:05:31.637638 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.637986 kubelet[2670]: I0514 00:05:31.637644 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-bpf-maps\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.637986 kubelet[2670]: I0514 00:05:31.637672 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.637986 kubelet[2670]: I0514 00:05:31.637693 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-host-proc-sys-net\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.638200 kubelet[2670]: I0514 00:05:31.637701 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cni-path" (OuterVolumeSpecName: "cni-path") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.638200 kubelet[2670]: I0514 00:05:31.637712 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-hostproc\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.638200 kubelet[2670]: I0514 00:05:31.637720 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.638200 kubelet[2670]: I0514 00:05:31.637728 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-etc-cni-netd\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.638200 kubelet[2670]: I0514 00:05:31.637741 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.638418 kubelet[2670]: I0514 00:05:31.637747 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4956121a-a340-4510-a7ef-1f247bd5ebf1-clustermesh-secrets\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.638418 kubelet[2670]: I0514 00:05:31.637772 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4956121a-a340-4510-a7ef-1f247bd5ebf1-hubble-tls\") pod \"4956121a-a340-4510-a7ef-1f247bd5ebf1\" (UID: \"4956121a-a340-4510-a7ef-1f247bd5ebf1\") " May 14 00:05:31.638418 kubelet[2670]: I0514 00:05:31.637772 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.638418 kubelet[2670]: I0514 00:05:31.637796 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-hostproc" (OuterVolumeSpecName: "hostproc") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.638418 kubelet[2670]: I0514 00:05:31.637805 2670 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.638418 kubelet[2670]: I0514 00:05:31.637814 2670 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.638644 kubelet[2670]: I0514 00:05:31.637823 2670 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.638644 kubelet[2670]: I0514 00:05:31.637834 2670 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.638644 kubelet[2670]: I0514 00:05:31.637843 2670 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nf5b6\" (UniqueName: \"kubernetes.io/projected/3180526a-450e-4fa4-aa2e-9d5d13c41558-kube-api-access-nf5b6\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.638644 kubelet[2670]: I0514 00:05:31.637851 2670 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.638644 kubelet[2670]: I0514 00:05:31.637860 2670 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3180526a-450e-4fa4-aa2e-9d5d13c41558-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.638644 kubelet[2670]: I0514 
00:05:31.637868 2670 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.638644 kubelet[2670]: I0514 00:05:31.638196 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.638964 kubelet[2670]: I0514 00:05:31.638255 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.642811 kubelet[2670]: I0514 00:05:31.642679 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4956121a-a340-4510-a7ef-1f247bd5ebf1-kube-api-access-fq5qn" (OuterVolumeSpecName: "kube-api-access-fq5qn") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "kube-api-access-fq5qn". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:05:31.642811 kubelet[2670]: I0514 00:05:31.642770 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:05:31.642981 systemd[1]: var-lib-kubelet-pods-4956121a\x2da340\x2d4510\x2da7ef\x2d1f247bd5ebf1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfq5qn.mount: Deactivated successfully. May 14 00:05:31.643302 kubelet[2670]: I0514 00:05:31.643153 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4956121a-a340-4510-a7ef-1f247bd5ebf1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:05:31.643133 systemd[1]: var-lib-kubelet-pods-4956121a\x2da340\x2d4510\x2da7ef\x2d1f247bd5ebf1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:05:31.643491 kubelet[2670]: I0514 00:05:31.643458 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4956121a-a340-4510-a7ef-1f247bd5ebf1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:05:31.645018 kubelet[2670]: I0514 00:05:31.644976 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4956121a-a340-4510-a7ef-1f247bd5ebf1" (UID: "4956121a-a340-4510-a7ef-1f247bd5ebf1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:05:31.646711 systemd[1]: var-lib-kubelet-pods-4956121a\x2da340\x2d4510\x2da7ef\x2d1f247bd5ebf1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:05:31.738108 kubelet[2670]: I0514 00:05:31.738050 2670 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.738108 kubelet[2670]: I0514 00:05:31.738086 2670 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.738108 kubelet[2670]: I0514 00:05:31.738098 2670 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4956121a-a340-4510-a7ef-1f247bd5ebf1-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.738108 kubelet[2670]: I0514 00:05:31.738110 2670 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.738108 kubelet[2670]: I0514 00:05:31.738122 2670 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4956121a-a340-4510-a7ef-1f247bd5ebf1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.738389 kubelet[2670]: I0514 00:05:31.738133 2670 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4956121a-a340-4510-a7ef-1f247bd5ebf1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.738389 kubelet[2670]: I0514 00:05:31.738144 2670 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4956121a-a340-4510-a7ef-1f247bd5ebf1-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.738389 kubelet[2670]: I0514 00:05:31.738156 2670 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fq5qn\" (UniqueName: \"kubernetes.io/projected/4956121a-a340-4510-a7ef-1f247bd5ebf1-kube-api-access-fq5qn\") on node \"localhost\" DevicePath \"\"" May 14 00:05:31.773404 sshd[4409]: Connection closed by 10.0.0.1 port 54732 May 14 00:05:31.773868 sshd-session[4406]: pam_unix(sshd:session): session closed for user core May 14 00:05:31.790864 systemd[1]: sshd@33-10.0.0.109:22-10.0.0.1:54732.service: Deactivated successfully. May 14 00:05:31.793416 systemd[1]: session-33.scope: Deactivated successfully. May 14 00:05:31.795690 systemd-logind[1502]: Session 33 logged out. Waiting for processes to exit. May 14 00:05:31.797255 systemd[1]: Started sshd@34-10.0.0.109:22-10.0.0.1:54738.service - OpenSSH per-connection server daemon (10.0.0.1:54738). May 14 00:05:31.798118 systemd-logind[1502]: Removed session 33. 
May 14 00:05:31.861367 sshd[4557]: Accepted publickey for core from 10.0.0.1 port 54738 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:31.863499 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:31.869440 systemd-logind[1502]: New session 34 of user core. May 14 00:05:31.877828 systemd[1]: Started session-34.scope - Session 34 of User core. May 14 00:05:32.041500 kubelet[2670]: I0514 00:05:32.041359 2670 scope.go:117] "RemoveContainer" containerID="63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523" May 14 00:05:32.043631 containerd[1521]: time="2025-05-14T00:05:32.043583128Z" level=info msg="RemoveContainer for \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\"" May 14 00:05:32.049199 systemd[1]: Removed slice kubepods-burstable-pod4956121a_a340_4510_a7ef_1f247bd5ebf1.slice - libcontainer container kubepods-burstable-pod4956121a_a340_4510_a7ef_1f247bd5ebf1.slice. May 14 00:05:32.049403 systemd[1]: kubepods-burstable-pod4956121a_a340_4510_a7ef_1f247bd5ebf1.slice: Consumed 8.356s CPU time, 125.2M memory peak, 248K read from disk, 13.3M written to disk. May 14 00:05:32.051949 systemd[1]: Removed slice kubepods-besteffort-pod3180526a_450e_4fa4_aa2e_9d5d13c41558.slice - libcontainer container kubepods-besteffort-pod3180526a_450e_4fa4_aa2e_9d5d13c41558.slice. May 14 00:05:32.429704 containerd[1521]: time="2025-05-14T00:05:32.429642785Z" level=info msg="RemoveContainer for \"63772d8568bd67b9cc9efe09ad6955bf986c84b38d478d1b6f8c30630c721523\" returns successfully" May 14 00:05:32.430190 kubelet[2670]: I0514 00:05:32.430089 2670 scope.go:117] "RemoveContainer" containerID="095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57" May 14 00:05:32.432356 containerd[1521]: time="2025-05-14T00:05:32.432310835Z" level=info msg="RemoveContainer for \"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\"" May 14 00:05:32.684858 containerd[1521]: time="2025-05-14T00:05:32.684556102Z" level=info msg="RemoveContainer for \"095822dde5e6ae351d9c0ec493e41f20c1513cd669002761c004b23184b70f57\" returns successfully" May 14 00:05:32.685009 kubelet[2670]: I0514 00:05:32.684930 2670 scope.go:117] "RemoveContainer" containerID="fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd" May 14 00:05:32.689864 containerd[1521]: time="2025-05-14T00:05:32.689813453Z" level=info msg="RemoveContainer for \"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\"" May 14 00:05:32.974765 containerd[1521]: time="2025-05-14T00:05:32.974572284Z" level=info msg="RemoveContainer for \"fbb998b908446f652c73090cc19219c50abbdbb326bb8f691b966efe430819cd\" returns successfully" May 14 00:05:32.974950 kubelet[2670]: I0514 00:05:32.974886 2670 scope.go:117] "RemoveContainer" containerID="440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390" May 14 00:05:32.977124 containerd[1521]: time="2025-05-14T00:05:32.977079469Z" level=info msg="RemoveContainer for \"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\"" May 14 00:05:33.698892 containerd[1521]: time="2025-05-14T00:05:33.698745524Z" level=info msg="RemoveContainer for \"440f46c063bd9e0e57a0e92f4340863a8d346ae98e381b9700ef9fd838038390\" returns successfully" May 14 00:05:33.700356 kubelet[2670]: I0514 00:05:33.699755 2670 scope.go:117] "RemoveContainer" containerID="fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7" May 14 00:05:33.701573 containerd[1521]: 
time="2025-05-14T00:05:33.701531406Z" level=info msg="RemoveContainer for \"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\"" May 14 00:05:33.835934 containerd[1521]: time="2025-05-14T00:05:33.835872208Z" level=info msg="RemoveContainer for \"fbdfe5c77043adc5d582ac11b0d864adce2322823a2850e0e4726ea2501690f7\" returns successfully" May 14 00:05:33.836184 kubelet[2670]: I0514 00:05:33.836146 2670 scope.go:117] "RemoveContainer" containerID="39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2" May 14 00:05:33.838038 containerd[1521]: time="2025-05-14T00:05:33.838002403Z" level=info msg="RemoveContainer for \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\"" May 14 00:05:33.985606 containerd[1521]: time="2025-05-14T00:05:33.985464640Z" level=info msg="RemoveContainer for \"39ab4f8a768cc69b0ccd712688d0f321d45a59eb9de3959308af120cba17efe2\" returns successfully" May 14 00:05:34.473945 kubelet[2670]: E0514 00:05:34.473889 2670 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:05:35.117288 sshd[4560]: Connection closed by 10.0.0.1 port 54738 May 14 00:05:35.117721 sshd-session[4557]: pam_unix(sshd:session): session closed for user core May 14 00:05:35.136763 systemd[1]: sshd@34-10.0.0.109:22-10.0.0.1:54738.service: Deactivated successfully. May 14 00:05:35.139441 systemd[1]: session-34.scope: Deactivated successfully. May 14 00:05:35.141356 systemd-logind[1502]: Session 34 logged out. Waiting for processes to exit. May 14 00:05:35.143106 systemd[1]: Started sshd@35-10.0.0.109:22-10.0.0.1:54746.service - OpenSSH per-connection server daemon (10.0.0.1:54746). May 14 00:05:35.144146 systemd-logind[1502]: Removed session 34. May 14 00:05:35.201699 sshd[4571]: Accepted publickey for core from 10.0.0.1 port 54746 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:35.203414 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:35.208707 systemd-logind[1502]: New session 35 of user core. May 14 00:05:35.218561 systemd[1]: Started session-35.scope - Session 35 of User core. May 14 00:05:35.275370 sshd[4574]: Connection closed by 10.0.0.1 port 54746 May 14 00:05:35.275895 sshd-session[4571]: pam_unix(sshd:session): session closed for user core May 14 00:05:35.289243 systemd[1]: sshd@35-10.0.0.109:22-10.0.0.1:54746.service: Deactivated successfully. May 14 00:05:35.291270 systemd[1]: session-35.scope: Deactivated successfully. May 14 00:05:35.293165 systemd-logind[1502]: Session 35 logged out. Waiting for processes to exit. May 14 00:05:35.294570 systemd[1]: Started sshd@36-10.0.0.109:22-10.0.0.1:54750.service - OpenSSH per-connection server daemon (10.0.0.1:54750). May 14 00:05:35.295630 systemd-logind[1502]: Removed session 35. May 14 00:05:35.341715 sshd[4580]: Accepted publickey for core from 10.0.0.1 port 54750 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:05:35.343958 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:35.349179 systemd-logind[1502]: New session 36 of user core. May 14 00:05:35.362482 systemd[1]: Started session-36.scope - Session 36 of User core. 
May 14 00:05:35.377810 kubelet[2670]: I0514 00:05:35.377646 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3180526a-450e-4fa4-aa2e-9d5d13c41558" path="/var/lib/kubelet/pods/3180526a-450e-4fa4-aa2e-9d5d13c41558/volumes" May 14 00:05:35.378483 kubelet[2670]: I0514 00:05:35.378448 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4956121a-a340-4510-a7ef-1f247bd5ebf1" path="/var/lib/kubelet/pods/4956121a-a340-4510-a7ef-1f247bd5ebf1/volumes" May 14 00:05:35.858647 kubelet[2670]: E0514 00:05:35.858243 2670 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4956121a-a340-4510-a7ef-1f247bd5ebf1" containerName="mount-cgroup" May 14 00:05:35.858647 kubelet[2670]: E0514 00:05:35.858289 2670 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4956121a-a340-4510-a7ef-1f247bd5ebf1" containerName="clean-cilium-state" May 14 00:05:35.858647 kubelet[2670]: E0514 00:05:35.858298 2670 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4956121a-a340-4510-a7ef-1f247bd5ebf1" containerName="cilium-agent" May 14 00:05:35.858647 kubelet[2670]: E0514 00:05:35.858307 2670 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4956121a-a340-4510-a7ef-1f247bd5ebf1" containerName="apply-sysctl-overwrites" May 14 00:05:35.858647 kubelet[2670]: E0514 00:05:35.858314 2670 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4956121a-a340-4510-a7ef-1f247bd5ebf1" containerName="mount-bpf-fs" May 14 00:05:35.858647 kubelet[2670]: E0514 00:05:35.858323 2670 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3180526a-450e-4fa4-aa2e-9d5d13c41558" containerName="cilium-operator" May 14 00:05:35.858647 kubelet[2670]: I0514 00:05:35.858355 2670 memory_manager.go:354] "RemoveStaleState removing state" podUID="4956121a-a340-4510-a7ef-1f247bd5ebf1" containerName="cilium-agent" May 14 00:05:35.858647 kubelet[2670]: I0514 00:05:35.858363 2670 memory_manager.go:354] "RemoveStaleState removing state" podUID="3180526a-450e-4fa4-aa2e-9d5d13c41558" containerName="cilium-operator" May 14 00:05:35.868722 systemd[1]: Created slice kubepods-burstable-pod7c3a037c_e2d1_44bb_a852_8a096375006d.slice - libcontainer container kubepods-burstable-pod7c3a037c_e2d1_44bb_a852_8a096375006d.slice. 
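
Note: the kubelet_volumes.go:163 lines are the kubelet's orphan sweep confirming that, with both pods deleted and every volume torn down, the per-pod directories under /var/lib/kubelet/pods can be reclaimed. A deliberately simplified sketch of that check, assuming an in-memory set of live pod UIDs; the real kubelet does far more bookkeeping (it will not touch directories that still hold mounted volumes), so treat this as an outline only:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes the volumes dir of any pod directory
// under podsRoot whose UID is no longer in the live set, loosely
// mirroring the "Cleaned up orphaned pod volumes dir" lines above.
func cleanupOrphanedPodDirs(podsRoot string, live map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || live[e.Name()] {
			continue
		}
		volumes := filepath.Join(podsRoot, e.Name(), "volumes")
		if err := os.RemoveAll(volumes); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), volumes)
	}
	return nil
}

func main() {
	live := map[string]bool{"7c3a037c-e2d1-44bb-a852-8a096375006d": true}
	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", live)
}
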
May 14 00:05:35.965061 kubelet[2670]: I0514 00:05:35.964985 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-xtables-lock\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965061 kubelet[2670]: I0514 00:05:35.965038 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-cilium-run\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965061 kubelet[2670]: I0514 00:05:35.965061 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c3a037c-e2d1-44bb-a852-8a096375006d-hubble-tls\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965300 kubelet[2670]: I0514 00:05:35.965094 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c3a037c-e2d1-44bb-a852-8a096375006d-cilium-ipsec-secrets\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965300 kubelet[2670]: I0514 00:05:35.965116 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-host-proc-sys-net\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965300 kubelet[2670]: I0514 00:05:35.965140 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-host-proc-sys-kernel\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965300 kubelet[2670]: I0514 00:05:35.965161 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r98cg\" (UniqueName: \"kubernetes.io/projected/7c3a037c-e2d1-44bb-a852-8a096375006d-kube-api-access-r98cg\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965300 kubelet[2670]: I0514 00:05:35.965196 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-bpf-maps\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965493 kubelet[2670]: I0514 00:05:35.965253 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-lib-modules\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965493 kubelet[2670]: I0514 00:05:35.965284 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-cilium-cgroup\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965493 kubelet[2670]: I0514 00:05:35.965302 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-cni-path\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965493 kubelet[2670]: I0514 00:05:35.965328 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c3a037c-e2d1-44bb-a852-8a096375006d-clustermesh-secrets\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965493 kubelet[2670]: I0514 00:05:35.965357 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c3a037c-e2d1-44bb-a852-8a096375006d-cilium-config-path\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965493 kubelet[2670]: I0514 00:05:35.965379 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-hostproc\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:35.965668 kubelet[2670]: I0514 00:05:35.965413 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c3a037c-e2d1-44bb-a852-8a096375006d-etc-cni-netd\") pod \"cilium-hpn2k\" (UID: \"7c3a037c-e2d1-44bb-a852-8a096375006d\") " pod="kube-system/cilium-hpn2k" May 14 00:05:37.371359 kubelet[2670]: E0514 00:05:37.371298 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:37.372017 containerd[1521]: time="2025-05-14T00:05:37.371944910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hpn2k,Uid:7c3a037c-e2d1-44bb-a852-8a096375006d,Namespace:kube-system,Attempt:0,}" May 14 00:05:38.352404 containerd[1521]: time="2025-05-14T00:05:38.352339093Z" level=info msg="connecting to shim ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773" address="unix:///run/containerd/s/9e95e809c19f16bbce2c20f2ea6319603b21edf06d40bab4b2cb1e44c151edbb" namespace=k8s.io protocol=ttrpc version=3 May 14 00:05:38.382576 systemd[1]: Started cri-containerd-ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773.scope - libcontainer container ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773. 
May 14 00:05:38.522135 containerd[1521]: time="2025-05-14T00:05:38.522058094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hpn2k,Uid:7c3a037c-e2d1-44bb-a852-8a096375006d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\"" May 14 00:05:38.523514 kubelet[2670]: E0514 00:05:38.523487 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:38.525744 containerd[1521]: time="2025-05-14T00:05:38.525696842Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:05:38.925583 containerd[1521]: time="2025-05-14T00:05:38.925503365Z" level=info msg="Container e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d: CDI devices from CRI Config.CDIDevices: []" May 14 00:05:39.201950 containerd[1521]: time="2025-05-14T00:05:39.201804988Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d\"" May 14 00:05:39.202526 containerd[1521]: time="2025-05-14T00:05:39.202483888Z" level=info msg="StartContainer for \"e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d\"" May 14 00:05:39.203674 containerd[1521]: time="2025-05-14T00:05:39.203639886Z" level=info msg="connecting to shim e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d" address="unix:///run/containerd/s/9e95e809c19f16bbce2c20f2ea6319603b21edf06d40bab4b2cb1e44c151edbb" protocol=ttrpc version=3 May 14 00:05:39.226388 systemd[1]: Started cri-containerd-e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d.scope - libcontainer container e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d. May 14 00:05:39.474949 kubelet[2670]: E0514 00:05:39.474807 2670 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:05:39.528498 containerd[1521]: time="2025-05-14T00:05:39.528409504Z" level=info msg="StartContainer for \"e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d\" returns successfully" May 14 00:05:39.587739 systemd[1]: cri-containerd-e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d.scope: Deactivated successfully. May 14 00:05:39.589728 containerd[1521]: time="2025-05-14T00:05:39.589689730Z" level=info msg="received exit event container_id:\"e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d\" id:\"e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d\" pid:4654 exited_at:{seconds:1747181139 nanos:589400995}" May 14 00:05:39.589812 containerd[1521]: time="2025-05-14T00:05:39.589784028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d\" id:\"e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d\" pid:4654 exited_at:{seconds:1747181139 nanos:589400995}" May 14 00:05:39.614056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8986f714585a5962003bdcf1cc4a763abf5b818ccb2d244268691f459e0153d-rootfs.mount: Deactivated successfully. 
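
Note: the paired "received exit event" / "TaskExit event" messages above come from containerd's event bus; the CRI plugin subscribes to /tasks/exit and reports the first init container (mount-cgroup) finishing. A small sketch of watching the same stream with the containerd Go client, assuming the default socket path:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Subscribe to task-exit events, the same envelopes the CRI plugin
	// logs above as "TaskExit event in podsandbox handler ...".
	ctx := context.Background()
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-ch:
			fmt.Printf("%s %s %s\n", env.Timestamp, env.Namespace, env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
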
May 14 00:05:40.234662 kubelet[2670]: E0514 00:05:40.234568 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:40.236377 containerd[1521]: time="2025-05-14T00:05:40.236335238Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:05:40.374866 kubelet[2670]: E0514 00:05:40.374803 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:40.680690 containerd[1521]: time="2025-05-14T00:05:40.680623581Z" level=info msg="Container 48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d: CDI devices from CRI Config.CDIDevices: []" May 14 00:05:40.681403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461076844.mount: Deactivated successfully. May 14 00:05:40.969230 containerd[1521]: time="2025-05-14T00:05:40.969065395Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d\"" May 14 00:05:40.969802 containerd[1521]: time="2025-05-14T00:05:40.969778169Z" level=info msg="StartContainer for \"48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d\"" May 14 00:05:40.970632 containerd[1521]: time="2025-05-14T00:05:40.970607652Z" level=info msg="connecting to shim 48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d" address="unix:///run/containerd/s/9e95e809c19f16bbce2c20f2ea6319603b21edf06d40bab4b2cb1e44c151edbb" protocol=ttrpc version=3 May 14 00:05:40.996502 systemd[1]: Started cri-containerd-48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d.scope - libcontainer container 48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d. May 14 00:05:41.033654 systemd[1]: cri-containerd-48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d.scope: Deactivated successfully. 
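
Note: the recurring dns.go:153 "Nameserver limits exceeded" errors mean the host resolv.conf lists more nameservers than the classic resolver limit of three, so the kubelet applies only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and warns that the rest were omitted. A toy sketch of that truncation rule; the constant matches glibc's MAXNS, but the helper name is ours, not the kubelet's:

package main

import "fmt"

// maxNameservers mirrors the classic resolver limit (glibc MAXNS).
const maxNameservers = 3

// capNameservers keeps the first maxNameservers entries and reports
// whether anything was dropped, like the kubelet warning above.
func capNameservers(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	applied, truncated := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	if truncated {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %v\n", applied)
	}
}
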
May 14 00:05:41.035273 containerd[1521]: time="2025-05-14T00:05:41.035202008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d\" id:\"48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d\" pid:4700 exited_at:{seconds:1747181141 nanos:34850836}" May 14 00:05:41.374393 kubelet[2670]: E0514 00:05:41.374344 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:41.377342 containerd[1521]: time="2025-05-14T00:05:41.377286172Z" level=info msg="received exit event container_id:\"48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d\" id:\"48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d\" pid:4700 exited_at:{seconds:1747181141 nanos:34850836}" May 14 00:05:41.378579 containerd[1521]: time="2025-05-14T00:05:41.378552539Z" level=info msg="StartContainer for \"48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d\" returns successfully" May 14 00:05:41.398215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48f376e179165fe39d34235233e5b8afddb4d507a957c06200bb565bfa25028d-rootfs.mount: Deactivated successfully. May 14 00:05:42.384276 kubelet[2670]: E0514 00:05:42.384234 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:42.385784 containerd[1521]: time="2025-05-14T00:05:42.385737016Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:05:42.673709 containerd[1521]: time="2025-05-14T00:05:42.673514178Z" level=info msg="Container 7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d: CDI devices from CRI Config.CDIDevices: []" May 14 00:05:42.872730 containerd[1521]: time="2025-05-14T00:05:42.872662265Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d\"" May 14 00:05:42.873561 containerd[1521]: time="2025-05-14T00:05:42.873510393Z" level=info msg="StartContainer for \"7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d\"" May 14 00:05:42.875036 containerd[1521]: time="2025-05-14T00:05:42.875002995Z" level=info msg="connecting to shim 7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d" address="unix:///run/containerd/s/9e95e809c19f16bbce2c20f2ea6319603b21edf06d40bab4b2cb1e44c151edbb" protocol=ttrpc version=3 May 14 00:05:42.894578 kubelet[2670]: I0514 00:05:42.894511 2670 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:05:42Z","lastTransitionTime":"2025-05-14T00:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 00:05:42.896361 systemd[1]: Started cri-containerd-7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d.scope - libcontainer container 7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d. 
May 14 00:05:43.139050 systemd[1]: cri-containerd-7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d.scope: Deactivated successfully. May 14 00:05:43.140567 containerd[1521]: time="2025-05-14T00:05:43.140497619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d\" id:\"7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d\" pid:4744 exited_at:{seconds:1747181143 nanos:140085342}" May 14 00:05:43.147180 containerd[1521]: time="2025-05-14T00:05:43.147126120Z" level=info msg="received exit event container_id:\"7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d\" id:\"7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d\" pid:4744 exited_at:{seconds:1747181143 nanos:140085342}" May 14 00:05:43.156001 containerd[1521]: time="2025-05-14T00:05:43.155930402Z" level=info msg="StartContainer for \"7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d\" returns successfully" May 14 00:05:43.168717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7953af6a368df6b94a91958683c9b48f6de31d7cbda5ee721e8f04a02b5c592d-rootfs.mount: Deactivated successfully. May 14 00:05:43.388471 kubelet[2670]: E0514 00:05:43.388440 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:44.393933 kubelet[2670]: E0514 00:05:44.393893 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:44.395738 containerd[1521]: time="2025-05-14T00:05:44.395699899Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:05:44.476075 kubelet[2670]: E0514 00:05:44.476023 2670 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:05:45.096991 containerd[1521]: time="2025-05-14T00:05:45.095958934Z" level=info msg="Container 52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c: CDI devices from CRI Config.CDIDevices: []" May 14 00:05:45.389598 containerd[1521]: time="2025-05-14T00:05:45.389483657Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c\"" May 14 00:05:45.390168 containerd[1521]: time="2025-05-14T00:05:45.390111800Z" level=info msg="StartContainer for \"52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c\"" May 14 00:05:45.391137 containerd[1521]: time="2025-05-14T00:05:45.391106936Z" level=info msg="connecting to shim 52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c" address="unix:///run/containerd/s/9e95e809c19f16bbce2c20f2ea6319603b21edf06d40bab4b2cb1e44c151edbb" protocol=ttrpc version=3 May 14 00:05:45.419519 systemd[1]: Started cri-containerd-52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c.scope - libcontainer container 52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c. 
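
Note: the exited_at:{seconds:... nanos:...} fields in these exit events are plain Unix timestamps. For the mount-bpf-fs exit above, seconds:1747181143 nanos:140085342 decodes to 2025-05-14 00:05:43.140 UTC, which lines up with the surrounding journal times; a two-line check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the mount-bpf-fs task-exit event above.
	exitedAt := time.Unix(1747181143, 140085342).UTC()
	fmt.Println(exitedAt) // 2025-05-14 00:05:43.140085342 +0000 UTC
}
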
May 14 00:05:45.452841 systemd[1]: cri-containerd-52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c.scope: Deactivated successfully. May 14 00:05:45.453498 containerd[1521]: time="2025-05-14T00:05:45.453346490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c\" id:\"52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c\" pid:4783 exited_at:{seconds:1747181145 nanos:452975231}" May 14 00:05:45.553287 containerd[1521]: time="2025-05-14T00:05:45.553196800Z" level=info msg="received exit event container_id:\"52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c\" id:\"52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c\" pid:4783 exited_at:{seconds:1747181145 nanos:452975231}" May 14 00:05:45.563436 containerd[1521]: time="2025-05-14T00:05:45.563380951Z" level=info msg="StartContainer for \"52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c\" returns successfully" May 14 00:05:45.577707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52f85de6910a9a2ce79c90843348948e9b62c81143ca21cf19d6acdacf3f377c-rootfs.mount: Deactivated successfully. May 14 00:05:46.405918 kubelet[2670]: E0514 00:05:46.405876 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:46.408784 containerd[1521]: time="2025-05-14T00:05:46.408731594Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:05:46.770314 containerd[1521]: time="2025-05-14T00:05:46.767137924Z" level=info msg="Container d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246: CDI devices from CRI Config.CDIDevices: []" May 14 00:05:47.389012 containerd[1521]: time="2025-05-14T00:05:47.388942287Z" level=info msg="CreateContainer within sandbox \"ee5fd478bd4033b041bcc711d3b6f7374f8caf4cec732cadb7ccb2016f171773\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\"" May 14 00:05:47.389600 containerd[1521]: time="2025-05-14T00:05:47.389575040Z" level=info msg="StartContainer for \"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\"" May 14 00:05:47.390622 containerd[1521]: time="2025-05-14T00:05:47.390598408Z" level=info msg="connecting to shim d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246" address="unix:///run/containerd/s/9e95e809c19f16bbce2c20f2ea6319603b21edf06d40bab4b2cb1e44c151edbb" protocol=ttrpc version=3 May 14 00:05:47.410380 systemd[1]: Started cri-containerd-d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246.scope - libcontainer container d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246. 
May 14 00:05:47.613023 containerd[1521]: time="2025-05-14T00:05:47.612842569Z" level=info msg="StartContainer for \"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" returns successfully" May 14 00:05:47.689314 containerd[1521]: time="2025-05-14T00:05:47.689183079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"a4a64af44a17635b4fca8c8aa537dd38b3270e3ab34490d330eabb1eb39c5c02\" pid:4859 exited_at:{seconds:1747181147 nanos:688894947}" May 14 00:05:48.077247 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 14 00:05:48.416973 kubelet[2670]: E0514 00:05:48.416892 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:48.737470 kubelet[2670]: I0514 00:05:48.736144 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hpn2k" podStartSLOduration=13.736127535 podStartE2EDuration="13.736127535s" podCreationTimestamp="2025-05-14 00:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:05:48.735379596 +0000 UTC m=+159.440808355" watchObservedRunningTime="2025-05-14 00:05:48.736127535 +0000 UTC m=+159.441556294" May 14 00:05:49.418754 kubelet[2670]: E0514 00:05:49.418687 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:49.795334 containerd[1521]: time="2025-05-14T00:05:49.795140717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"6cc038141ee1f79c9e7ffb871dc183e593e39bc2de0fc44c291263d222f9b3b0\" pid:4962 exit_status:1 exited_at:{seconds:1747181149 nanos:794747066}" May 14 00:05:50.420643 kubelet[2670]: E0514 00:05:50.420595 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:52.212383 containerd[1521]: time="2025-05-14T00:05:52.212309589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"d996bf3fcf90881634d32978dbb2763b1a9520a7e7299018eda82f78bceaa5c3\" pid:5357 exit_status:1 exited_at:{seconds:1747181152 nanos:211740947}" May 14 00:05:52.295643 systemd-networkd[1444]: lxc_health: Link UP May 14 00:05:52.296948 systemd-networkd[1444]: lxc_health: Gained carrier May 14 00:05:53.363539 systemd-networkd[1444]: lxc_health: Gained IPv6LL May 14 00:05:53.373551 kubelet[2670]: E0514 00:05:53.373500 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:53.426543 kubelet[2670]: E0514 00:05:53.426474 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:54.428546 kubelet[2670]: E0514 00:05:54.428507 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 
14 00:05:56.075422 containerd[1521]: time="2025-05-14T00:05:56.075364416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"43213685c2bf8f3bfe25c8378abbda5535dca6849c86946298b0d978b84fe6ec\" pid:5460 exited_at:{seconds:1747181156 nanos:74879490}" May 14 00:05:58.372314 containerd[1521]: time="2025-05-14T00:05:58.372270343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"06dd66ebc70827e086de8251f3d7df37d67bc40ebc1e0234a14a22c87596db8a\" pid:5485 exited_at:{seconds:1747181158 nanos:371871473}" May 14 00:05:58.374517 kubelet[2670]: E0514 00:05:58.374489 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:58.375066 kubelet[2670]: E0514 00:05:58.374913 2670 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39062->127.0.0.1:43037: write tcp 127.0.0.1:39062->127.0.0.1:43037: write: broken pipe May 14 00:06:00.538930 containerd[1521]: time="2025-05-14T00:06:00.538723743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"879b40e0a6f6edea2b34c63c5eba3c53f6b447a8ebbeef56f3303fc37ceaff7d\" pid:5510 exited_at:{seconds:1747181160 nanos:537709044}" May 14 00:06:01.375060 kubelet[2670]: E0514 00:06:01.374972 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:06:02.634476 containerd[1521]: time="2025-05-14T00:06:02.634390909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"b58e8f6f8b0a1afb30441f17d3bf8ffdd0332fedced1c04662eaa87e7f916db9\" pid:5535 exited_at:{seconds:1747181162 nanos:633930862}" May 14 00:06:04.765811 containerd[1521]: time="2025-05-14T00:06:04.765750243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"964a2ff4d16b6af4e362880b7501c9cd06c0579c9bee3e841277c43265728f4d\" pid:5560 exited_at:{seconds:1747181164 nanos:765376714}" May 14 00:06:06.890275 containerd[1521]: time="2025-05-14T00:06:06.890184637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"df5ec6299b5fac698300fe827528d4141289b0bc5f1010c9f648891b75df39a9\" pid:5583 exited_at:{seconds:1747181166 nanos:889818383}" May 14 00:06:08.996915 containerd[1521]: time="2025-05-14T00:06:08.996856267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8777ca3cc8a59b7860219a3d0a95cb4c5b69803399905c9a85cd59115aa2246\" id:\"6fd253d3b709c8f5458077a1c9b8d288710b20577bef701314bfc40d3bc18ed6\" pid:5607 exited_at:{seconds:1747181168 nanos:996413765}" May 14 00:06:09.013895 sshd[4583]: Connection closed by 10.0.0.1 port 54750 May 14 00:06:09.014342 sshd-session[4580]: pam_unix(sshd:session): session closed for user core May 14 00:06:09.019099 systemd[1]: sshd@36-10.0.0.109:22-10.0.0.1:54750.service: Deactivated successfully. May 14 00:06:09.021537 systemd[1]: session-36.scope: Deactivated successfully. May 14 00:06:09.022616 systemd-logind[1502]: Session 36 logged out.
Waiting for processes to exit. May 14 00:06:09.023852 systemd-logind[1502]: Removed session 36. May 14 00:06:09.381790 containerd[1521]: time="2025-05-14T00:06:09.381742273Z" level=info msg="StopPodSandbox for \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\"" May 14 00:06:09.381953 containerd[1521]: time="2025-05-14T00:06:09.381903303Z" level=info msg="TearDown network for sandbox \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" successfully" May 14 00:06:09.381953 containerd[1521]: time="2025-05-14T00:06:09.381919133Z" level=info msg="StopPodSandbox for \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" returns successfully" May 14 00:06:09.382277 containerd[1521]: time="2025-05-14T00:06:09.382251302Z" level=info msg="RemovePodSandbox for \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\"" May 14 00:06:09.382337 containerd[1521]: time="2025-05-14T00:06:09.382277773Z" level=info msg="Forcibly stopping sandbox \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\"" May 14 00:06:09.382373 containerd[1521]: time="2025-05-14T00:06:09.382340554Z" level=info msg="TearDown network for sandbox \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" successfully" May 14 00:06:09.383839 containerd[1521]: time="2025-05-14T00:06:09.383815622Z" level=info msg="Ensure that sandbox afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910 in task-service has been cleanup successfully" May 14 00:06:09.499365 containerd[1521]: time="2025-05-14T00:06:09.499291027Z" level=info msg="RemovePodSandbox \"afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910\" returns successfully" May 14 00:06:09.499994 containerd[1521]: time="2025-05-14T00:06:09.499969172Z" level=info msg="StopPodSandbox for \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\"" May 14 00:06:09.500151 containerd[1521]: time="2025-05-14T00:06:09.500117888Z" level=info msg="TearDown network for sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" successfully" May 14 00:06:09.500151 containerd[1521]: time="2025-05-14T00:06:09.500143598Z" level=info msg="StopPodSandbox for \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" returns successfully" May 14 00:06:09.500809 containerd[1521]: time="2025-05-14T00:06:09.500428166Z" level=info msg="RemovePodSandbox for \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\"" May 14 00:06:09.500809 containerd[1521]: time="2025-05-14T00:06:09.500455017Z" level=info msg="Forcibly stopping sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\"" May 14 00:06:09.500809 containerd[1521]: time="2025-05-14T00:06:09.500523819Z" level=info msg="TearDown network for sandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" successfully" May 14 00:06:09.502442 containerd[1521]: time="2025-05-14T00:06:09.502398576Z" level=info msg="Ensure that sandbox 6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041 in task-service has been cleanup successfully" May 14 00:06:09.628543 containerd[1521]: time="2025-05-14T00:06:09.628476984Z" level=info msg="RemovePodSandbox \"6156f16a1a0777fb0419faf07563a703e9073b2a3f9d83fcbcc7dc97e7572041\" returns successfully"
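
Note: the closing StopPodSandbox / RemovePodSandbox sequence is the kubelet garbage-collecting the sandboxes left over from the deleted pods; "TearDown network ... successfully" is trivial here because the CNI state is already gone, and the "Forcibly stopping" plus "Ensure that sandbox ... has been cleanup successfully" steps precede each final removal. The equivalent CRI calls, sketched with the same client and socket assumptions as earlier and the first sandbox ID from the log:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	id := "afc6037050a432a60b879c54676b81a3483abcc863a1d76e256c2f4c12611910"
	ctx := context.Background()
	// Stop tears down the sandbox (and its network namespace); Remove
	// deletes it. containerd logs these as StopPodSandbox/RemovePodSandbox.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
}
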