May 13 23:57:04.945579 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:19:41 -00 2025 May 13 23:57:04.945611 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51 May 13 23:57:04.945628 kernel: BIOS-provided physical RAM map: May 13 23:57:04.945637 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 23:57:04.945646 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 23:57:04.945655 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 23:57:04.945665 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 13 23:57:04.945674 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 13 23:57:04.945683 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 23:57:04.945695 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 13 23:57:04.945704 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 23:57:04.945713 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 23:57:04.945736 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 13 23:57:04.945746 kernel: NX (Execute Disable) protection: active May 13 23:57:04.945757 kernel: APIC: Static calls initialized May 13 23:57:04.945775 kernel: SMBIOS 2.8 present. 
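The BIOS-e820 entries above are the firmware's map of physical memory; only the ranges tagged `usable` are handed to the page allocator. As a side note (not part of the log), here is a minimal Python sketch that totals those usable ranges from captured dmesg text; the sample input is copied verbatim from the entries above:

```python
import re

# Matches the BIOS-e820 lines logged above, e.g.
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg: str) -> int:
    """Sum the inclusive 'usable' ranges reported by the firmware."""
    total = 0
    for start, end, kind in E820_RE.findall(dmesg):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total

sample = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""
# Prints 2571439 KiB (~2.45 GiB); compare the kernel's later
# "Memory: ...K available" line, which also accounts for reservations.
print(usable_bytes(sample) // 1024, "KiB")
```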
May 13 23:57:04.945785 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 13 23:57:04.945795 kernel: Hypervisor detected: KVM May 13 23:57:04.945804 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 23:57:04.945814 kernel: kvm-clock: using sched offset of 3375159975 cycles May 13 23:57:04.945824 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 23:57:04.945834 kernel: tsc: Detected 2794.748 MHz processor May 13 23:57:04.945844 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 23:57:04.945854 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 23:57:04.945864 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 13 23:57:04.945878 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 13 23:57:04.945888 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 23:57:04.945898 kernel: Using GB pages for direct mapping May 13 23:57:04.945916 kernel: ACPI: Early table checksum verification disabled May 13 23:57:04.945927 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 13 23:57:04.945936 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:57:04.945946 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:57:04.945956 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:57:04.945966 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 13 23:57:04.945980 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:57:04.945990 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:57:04.946000 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:57:04.946009 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:57:04.946019 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 13 23:57:04.946030 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 13 23:57:04.946045 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 13 23:57:04.946059 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 13 23:57:04.946069 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 13 23:57:04.946079 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 13 23:57:04.946089 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 13 23:57:04.946100 kernel: No NUMA configuration found May 13 23:57:04.946110 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 13 23:57:04.946120 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 13 23:57:04.946134 kernel: Zone ranges: May 13 23:57:04.946144 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 23:57:04.946155 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 13 23:57:04.946165 kernel: Normal empty May 13 23:57:04.946175 kernel: Movable zone start for each node May 13 23:57:04.946185 kernel: Early memory node ranges May 13 23:57:04.946195 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 23:57:04.946205 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 13 23:57:04.946215 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] May 13 23:57:04.946229 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 23:57:04.946239 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 23:57:04.946250 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 13 23:57:04.946260 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 23:57:04.946270 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 23:57:04.946280 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 23:57:04.946291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 23:57:04.946301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 23:57:04.946311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 23:57:04.946325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 23:57:04.946335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 23:57:04.946345 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 23:57:04.946356 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 23:57:04.946366 kernel: TSC deadline timer available May 13 23:57:04.946376 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 13 23:57:04.946386 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 13 23:57:04.946396 kernel: kvm-guest: KVM setup pv remote TLB flush May 13 23:57:04.946411 kernel: kvm-guest: setup PV sched yield May 13 23:57:04.946421 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 13 23:57:04.946435 kernel: Booting paravirtualized kernel on KVM May 13 23:57:04.946446 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 23:57:04.946456 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 13 23:57:04.946466 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 13 23:57:04.946476 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 13 23:57:04.946486 kernel: pcpu-alloc: [0] 0 1 2 3 May 13 23:57:04.946496 kernel: kvm-guest: PV spinlocks enabled May 13 23:57:04.946506 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 23:57:04.946518 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51 May 13 23:57:04.946532 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:57:04.946542 kernel: random: crng init done May 13 23:57:04.946553 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:57:04.946563 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:57:04.946573 kernel: Fallback order for Node 0: 0 May 13 23:57:04.946583 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 May 13 23:57:04.946593 kernel: Policy zone: DMA32 May 13 23:57:04.946604 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:57:04.946619 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43480K init, 1596K bss, 138948K reserved, 0K cma-reserved) May 13 23:57:04.946629 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 23:57:04.946640 kernel: ftrace: allocating 37918 entries in 149 pages May 13 23:57:04.946650 kernel: ftrace: allocated 149 pages with 4 groups May 13 23:57:04.946660 kernel: Dynamic Preempt: voluntary May 13 23:57:04.946670 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:57:04.946681 kernel: rcu: RCU event tracing is enabled. May 13 23:57:04.946692 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 23:57:04.946702 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:57:04.946716 kernel: Rude variant of Tasks RCU enabled. May 13 23:57:04.946741 kernel: Tracing variant of Tasks RCU enabled. May 13 23:57:04.946751 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 23:57:04.946761 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 23:57:04.946771 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 13 23:57:04.946782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:57:04.946792 kernel: Console: colour VGA+ 80x25 May 13 23:57:04.946802 kernel: printk: console [ttyS0] enabled May 13 23:57:04.946812 kernel: ACPI: Core revision 20230628 May 13 23:57:04.946827 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 13 23:57:04.946837 kernel: APIC: Switch to symmetric I/O mode setup May 13 23:57:04.946848 kernel: x2apic enabled May 13 23:57:04.946858 kernel: APIC: Switched APIC routing to: physical x2apic May 13 23:57:04.946868 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 13 23:57:04.946879 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 13 23:57:04.946889 kernel: kvm-guest: setup PV IPIs May 13 23:57:04.946937 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 23:57:04.946959 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 23:57:04.946989 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 13 23:57:04.947018 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 13 23:57:04.947055 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 13 23:57:04.947089 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 13 23:57:04.947101 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 23:57:04.947112 kernel: Spectre V2 : Mitigation: Retpolines May 13 23:57:04.947123 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 23:57:04.947139 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 13 23:57:04.947150 kernel: RETBleed: Mitigation: untrained return thunk May 13 23:57:04.947164 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 23:57:04.947175 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 13 23:57:04.947186 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
May 13 23:57:04.947197 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 13 23:57:04.947209 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 13 23:57:04.947220 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 23:57:04.947230 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 23:57:04.947245 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 23:57:04.947256 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 23:57:04.947267 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 13 23:57:04.947278 kernel: Freeing SMP alternatives memory: 32K May 13 23:57:04.947288 kernel: pid_max: default: 32768 minimum: 301 May 13 23:57:04.947299 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:57:04.947310 kernel: landlock: Up and running. May 13 23:57:04.947321 kernel: SELinux: Initializing. May 13 23:57:04.947332 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:57:04.947346 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:57:04.947357 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 13 23:57:04.947368 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:57:04.947379 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:57:04.947390 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:57:04.947401 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 13 23:57:04.947411 kernel: ... version: 0 May 13 23:57:04.947422 kernel: ... bit width: 48 May 13 23:57:04.947436 kernel: ... generic registers: 6 May 13 23:57:04.947447 kernel: ... value mask: 0000ffffffffffff May 13 23:57:04.947457 kernel: ... max period: 00007fffffffffff May 13 23:57:04.947468 kernel: ... fixed-purpose events: 0 May 13 23:57:04.947478 kernel: ... event mask: 000000000000003f May 13 23:57:04.947489 kernel: signal: max sigframe size: 1776 May 13 23:57:04.947500 kernel: rcu: Hierarchical SRCU implementation. May 13 23:57:04.947511 kernel: rcu: Max phase no-delay instances is 400. May 13 23:57:04.947521 kernel: smp: Bringing up secondary CPUs ... May 13 23:57:04.947532 kernel: smpboot: x86: Booting SMP configuration: May 13 23:57:04.947546 kernel: .... 
node #0, CPUs: #1 #2 #3 May 13 23:57:04.947557 kernel: smp: Brought up 1 node, 4 CPUs May 13 23:57:04.947568 kernel: smpboot: Max logical packages: 1 May 13 23:57:04.947578 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 13 23:57:04.947589 kernel: devtmpfs: initialized May 13 23:57:04.947600 kernel: x86/mm: Memory block size: 128MB May 13 23:57:04.947611 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:57:04.947622 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 23:57:04.947632 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:57:04.947647 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:57:04.947658 kernel: audit: initializing netlink subsys (disabled) May 13 23:57:04.947669 kernel: audit: type=2000 audit(1747180623.722:1): state=initialized audit_enabled=0 res=1 May 13 23:57:04.947679 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:57:04.947690 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 23:57:04.947700 kernel: cpuidle: using governor menu May 13 23:57:04.947711 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:57:04.947813 kernel: dca service started, version 1.12.1 May 13 23:57:04.947825 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 13 23:57:04.947840 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 13 23:57:04.947851 kernel: PCI: Using configuration type 1 for base access May 13 23:57:04.947862 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 13 23:57:04.947873 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:57:04.947883 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:57:04.947894 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:57:04.947912 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:57:04.947923 kernel: ACPI: Added _OSI(Module Device) May 13 23:57:04.947934 kernel: ACPI: Added _OSI(Processor Device) May 13 23:57:04.947949 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:57:04.947959 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:57:04.947970 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:57:04.947981 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 23:57:04.947992 kernel: ACPI: Interpreter enabled May 13 23:57:04.948003 kernel: ACPI: PM: (supports S0 S3 S5) May 13 23:57:04.948013 kernel: ACPI: Using IOAPIC for interrupt routing May 13 23:57:04.948025 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 23:57:04.948035 kernel: PCI: Using E820 reservations for host bridge windows May 13 23:57:04.948049 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 13 23:57:04.948060 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:57:04.948427 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:57:04.948656 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 13 23:57:04.948863 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 13 23:57:04.948879 kernel: PCI host bridge to bus 0000:00 May 13 23:57:04.949063 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
May 13 23:57:04.949225 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 23:57:04.949374 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 23:57:04.949520 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 13 23:57:04.949665 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 23:57:04.949888 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 13 23:57:04.950061 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:57:04.950263 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 13 23:57:04.950457 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 13 23:57:04.950619 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 13 23:57:04.950844 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 13 23:57:04.951032 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 13 23:57:04.951197 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 23:57:04.951381 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 13 23:57:04.951554 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 13 23:57:04.951715 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 13 23:57:04.951917 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 13 23:57:04.952110 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 13 23:57:04.952274 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 13 23:57:04.952436 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 13 23:57:04.952598 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 13 23:57:04.952815 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 13 23:57:04.952993 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 13 23:57:04.953160 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 13 23:57:04.953324 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 13 23:57:04.953487 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 13 23:57:04.953667 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 13 23:57:04.953932 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 13 23:57:04.954121 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 13 23:57:04.954282 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 13 23:57:04.954439 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 13 23:57:04.954616 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 13 23:57:04.954803 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 13 23:57:04.954819 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 23:57:04.954831 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 23:57:04.954848 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 23:57:04.954859 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 23:57:04.954870 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 13 23:57:04.954880 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 13 23:57:04.954891 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 13 23:57:04.954911 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 13 23:57:04.954922 
kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 13 23:57:04.954933 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 13 23:57:04.954943 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 13 23:57:04.954958 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 13 23:57:04.954969 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 13 23:57:04.954980 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 13 23:57:04.954990 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 13 23:57:04.955001 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 13 23:57:04.955011 kernel: iommu: Default domain type: Translated May 13 23:57:04.955022 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 23:57:04.955033 kernel: PCI: Using ACPI for IRQ routing May 13 23:57:04.955043 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 23:57:04.955058 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 23:57:04.955069 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 13 23:57:04.955235 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 13 23:57:04.955371 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 13 23:57:04.955501 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 23:57:04.955512 kernel: vgaarb: loaded May 13 23:57:04.955520 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 23:57:04.955528 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 23:57:04.955541 kernel: clocksource: Switched to clocksource kvm-clock May 13 23:57:04.955549 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:57:04.955557 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:57:04.955565 kernel: pnp: PnP ACPI init May 13 23:57:04.955774 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 13 23:57:04.955788 kernel: pnp: PnP ACPI: found 6 devices May 13 23:57:04.955797 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 23:57:04.955805 kernel: NET: Registered PF_INET protocol family May 13 23:57:04.955818 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:57:04.955826 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 23:57:04.955834 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:57:04.955842 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:57:04.955850 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 23:57:04.955858 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 23:57:04.955866 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:57:04.955874 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:57:04.955882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:57:04.955893 kernel: NET: Registered PF_XDP protocol family May 13 23:57:04.956026 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 23:57:04.956145 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 23:57:04.956263 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 23:57:04.956382 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 13 23:57:04.956499 kernel: 
pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 13 23:57:04.956617 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 13 23:57:04.956628 kernel: PCI: CLS 0 bytes, default 64 May 13 23:57:04.956641 kernel: Initialise system trusted keyrings May 13 23:57:04.956649 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 23:57:04.956656 kernel: Key type asymmetric registered May 13 23:57:04.956664 kernel: Asymmetric key parser 'x509' registered May 13 23:57:04.956672 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 23:57:04.956680 kernel: io scheduler mq-deadline registered May 13 23:57:04.956688 kernel: io scheduler kyber registered May 13 23:57:04.956696 kernel: io scheduler bfq registered May 13 23:57:04.956704 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 23:57:04.956715 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 13 23:57:04.956735 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 13 23:57:04.956743 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 13 23:57:04.956751 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:57:04.956759 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 23:57:04.956767 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 23:57:04.956775 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 23:57:04.956783 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 23:57:04.956951 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 23:57:04.956968 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 23:57:04.957091 kernel: rtc_cmos 00:04: registered as rtc0 May 13 23:57:04.957216 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T23:57:04 UTC (1747180624) May 13 23:57:04.957340 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 13 23:57:04.957350 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 13 23:57:04.957358 kernel: NET: Registered PF_INET6 protocol family May 13 23:57:04.957366 kernel: Segment Routing with IPv6 May 13 23:57:04.957374 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:57:04.957386 kernel: NET: Registered PF_PACKET protocol family May 13 23:57:04.957394 kernel: Key type dns_resolver registered May 13 23:57:04.957402 kernel: IPI shorthand broadcast: enabled May 13 23:57:04.957410 kernel: sched_clock: Marking stable (749003238, 113160956)->(884626176, -22461982) May 13 23:57:04.957418 kernel: registered taskstats version 1 May 13 23:57:04.957426 kernel: Loading compiled-in X.509 certificates May 13 23:57:04.957434 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 50ddd1b04864f80ac4ca221f8647fbbda919e0fd' May 13 23:57:04.957441 kernel: Key type .fscrypt registered May 13 23:57:04.957449 kernel: Key type fscrypt-provisioning registered May 13 23:57:04.957460 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 23:57:04.957468 kernel: ima: Allocated hash algorithm: sha1 May 13 23:57:04.957476 kernel: ima: No architecture policies found May 13 23:57:04.957484 kernel: clk: Disabling unused clocks May 13 23:57:04.957492 kernel: Freeing unused kernel image (initmem) memory: 43480K May 13 23:57:04.957499 kernel: Write protecting the kernel read-only data: 38912k May 13 23:57:04.957507 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 13 23:57:04.957515 kernel: Run /init as init process May 13 23:57:04.957523 kernel: with arguments: May 13 23:57:04.957533 kernel: /init May 13 23:57:04.957541 kernel: with environment: May 13 23:57:04.957549 kernel: HOME=/ May 13 23:57:04.957557 kernel: TERM=linux May 13 23:57:04.957564 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:57:04.957573 systemd[1]: Successfully made /usr/ read-only. May 13 23:57:04.957585 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:57:04.957594 systemd[1]: Detected virtualization kvm. May 13 23:57:04.957606 systemd[1]: Detected architecture x86-64. May 13 23:57:04.957614 systemd[1]: Running in initrd. May 13 23:57:04.957623 systemd[1]: No hostname configured, using default hostname. May 13 23:57:04.957631 systemd[1]: Hostname set to <localhost>. May 13 23:57:04.957640 systemd[1]: Initializing machine ID from VM UUID. May 13 23:57:04.957648 systemd[1]: Queued start job for default target initrd.target. May 13 23:57:04.957657 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:57:04.957666 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:57:04.957678 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:57:04.957701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:57:04.957713 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:57:04.957783 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:57:04.957794 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:57:04.957808 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:57:04.957817 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:57:04.957825 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:57:04.957834 systemd[1]: Reached target paths.target - Path Units. May 13 23:57:04.957843 systemd[1]: Reached target slices.target - Slice Units. May 13 23:57:04.957851 systemd[1]: Reached target swap.target - Swaps. May 13 23:57:04.957860 systemd[1]: Reached target timers.target - Timer Units. May 13 23:57:04.957869 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:57:04.957880 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:57:04.957889 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
May 13 23:57:04.957904 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:57:04.957913 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:57:04.957921 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:57:04.957930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:57:04.957939 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:57:04.957948 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:57:04.957956 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:57:04.957971 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:57:04.957980 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:57:04.957990 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:57:04.957998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:57:04.958007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:57:04.958019 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:57:04.958031 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:57:04.958048 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:57:04.958090 systemd-journald[194]: Collecting audit messages is disabled. May 13 23:57:04.958124 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:57:04.958140 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:57:04.958152 systemd-journald[194]: Journal started May 13 23:57:04.958181 systemd-journald[194]: Runtime Journal (/run/log/journal/c9f0dd6339154932b58a52b60b0462b3) is 6M, max 48.4M, 42.3M free. May 13 23:57:04.948715 systemd-modules-load[195]: Inserted module 'overlay' May 13 23:57:04.986016 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:57:04.986036 kernel: Bridge firewalling registered May 13 23:57:04.975427 systemd-modules-load[195]: Inserted module 'br_netfilter' May 13 23:57:04.988763 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:57:04.989145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:57:04.991482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:05.012911 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:57:05.032583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:57:05.035171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:57:05.038321 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:57:05.046296 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:57:05.047967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:57:05.048357 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:57:05.058850 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 13 23:57:05.077675 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:57:05.080967 dracut-cmdline[230]: dracut-dracut-053 May 13 23:57:05.080967 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51 May 13 23:57:05.081697 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:57:05.121950 systemd-resolved[250]: Positive Trust Anchors: May 13 23:57:05.121963 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:57:05.121993 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:57:05.124596 systemd-resolved[250]: Defaulting to hostname 'linux'. May 13 23:57:05.125761 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:57:05.160305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:57:05.178751 kernel: SCSI subsystem initialized May 13 23:57:05.187761 kernel: Loading iSCSI transport class v2.0-870. May 13 23:57:05.198745 kernel: iscsi: registered transport (tcp) May 13 23:57:05.219932 kernel: iscsi: registered transport (qla4xxx) May 13 23:57:05.219969 kernel: QLogic iSCSI HBA Driver May 13 23:57:05.273297 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:57:05.296011 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:57:05.321744 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:57:05.321779 kernel: device-mapper: uevent: version 1.0.3 May 13 23:57:05.323317 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:57:05.376747 kernel: raid6: avx2x4 gen() 30344 MB/s May 13 23:57:05.393744 kernel: raid6: avx2x2 gen() 30770 MB/s May 13 23:57:05.410829 kernel: raid6: avx2x1 gen() 25872 MB/s May 13 23:57:05.410850 kernel: raid6: using algorithm avx2x2 gen() 30770 MB/s May 13 23:57:05.452744 kernel: raid6: .... xor() 19874 MB/s, rmw enabled May 13 23:57:05.452767 kernel: raid6: using avx2x2 recovery algorithm May 13 23:57:05.472749 kernel: xor: automatically using best checksumming function avx May 13 23:57:05.620767 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:57:05.636149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:57:05.650061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:57:05.665697 systemd-udevd[416]: Using default interface naming scheme 'v255'. 
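The dracut-cmdline hook above echoes the effective kernel command line; every consumer of it (the kernel, dracut, systemd, Ignition) tokenizes it into whitespace-separated `key=value` words. A rough Python illustration of that tokenization (not dracut's actual parser; note that repeated keys such as `rootflags=rw` simply show up twice):

```python
def parse_cmdline(cmdline: str) -> dict[str, list[str]]:
    """Split a kernel command line into key -> values; bare flags map to []."""
    params: dict[str, list[str]] = {}
    for word in cmdline.split():
        key, sep, value = word.partition("=")
        params.setdefault(key, [])
        if sep:
            params[key].append(value)
    return params

# Abridged from the "Using kernel command line parameters:" entry above.
cl = ("rd.driver.pre=btrfs rootflags=rw mount.usr=/dev/mapper/usr "
      "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
      "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")
print(parse_cmdline(cl)["root"])        # ['LABEL=ROOT']
print(parse_cmdline(cl)["verity.usr"])  # ['PARTUUID=7130c94a-...']
```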
May 13 23:57:05.671597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:57:05.685927 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:57:05.700020 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation May 13 23:57:05.738495 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:57:05.752948 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:57:05.820752 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:57:05.831045 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:57:05.846923 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:57:05.850842 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:57:05.853412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:57:05.855688 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:57:05.870741 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 13 23:57:05.872925 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 23:57:05.868129 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:57:05.877464 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:57:05.877491 kernel: GPT:9289727 != 19775487 May 13 23:57:05.877508 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:57:05.877518 kernel: GPT:9289727 != 19775487 May 13 23:57:05.877811 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:57:05.878992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:57:05.880047 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:57:05.887778 kernel: cryptd: max_cpu_qlen set to 1000 May 13 23:57:05.892744 kernel: libata version 3.00 loaded. May 13 23:57:05.900743 kernel: AVX2 version of gcm_enc/dec engaged. May 13 23:57:05.900775 kernel: AES CTR mode by8 optimization enabled May 13 23:57:05.902012 kernel: ahci 0000:00:1f.2: version 3.0 May 13 23:57:05.902261 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 13 23:57:05.904735 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 13 23:57:05.904951 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 13 23:57:05.911023 kernel: scsi host0: ahci May 13 23:57:05.911224 kernel: scsi host1: ahci May 13 23:57:05.913361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:57:05.914589 kernel: scsi host2: ahci May 13 23:57:05.915607 kernel: scsi host3: ahci May 13 23:57:05.913439 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:57:05.918122 kernel: scsi host4: ahci May 13 23:57:05.919209 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:57:05.922737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:57:05.922798 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
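The `GPT:9289727 != 19775487` warnings above mean the backup GPT header is not on the disk's last LBA, the usual sign that the image was grown after the partition table was written. The arithmetic, using the numbers from this log:

```python
SECTOR = 512                # "[vda] 19775488 512-byte logical blocks"
total_sectors = 19775488
expected_alt_lba = total_sectors - 1   # backup GPT header belongs on the last LBA
found_alt_lba = 9289727                # where the kernel actually found it

grown = (expected_alt_lba - found_alt_lba) * SECTOR
print(f"image grew by {grown / 2**30:.1f} GiB after the GPT was written")  # 5.0 GiB
# The kernel suggests GNU Parted; `sgdisk -e /dev/vda` likewise moves the
# backup structures to the new end of the disk.
```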
May 13 23:57:05.927710 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (465) May 13 23:57:05.928751 kernel: scsi host5: ahci May 13 23:57:05.954790 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 13 23:57:05.954817 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 13 23:57:05.954829 kernel: BTRFS: device fsid 87997324-54dc-4f74-bc1a-3f18f5f2e9f7 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (476) May 13 23:57:05.954846 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 13 23:57:05.959444 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 13 23:57:05.959466 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 13 23:57:05.959477 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 13 23:57:05.959570 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:57:05.970886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:57:05.971574 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:05.991108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 23:57:06.013265 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 23:57:06.033267 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 23:57:06.033545 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 23:57:06.036982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:06.048985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:57:06.065942 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:57:06.068512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:57:06.093311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:57:06.137098 disk-uuid[561]: Primary Header is updated. May 13 23:57:06.137098 disk-uuid[561]: Secondary Entries is updated. May 13 23:57:06.137098 disk-uuid[561]: Secondary Header is updated. 
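The `Found device ...` messages above are systemd device units coming online as udev populates the /dev/disk symlink trees; the `\x2d` sequences are just escaped hyphens in unit names. A small sketch resolving the same identifiers at runtime (the symlink locations are real; the helper itself is hypothetical):

```python
from pathlib import Path

def resolve(kind: str, name: str) -> Path | None:
    """Follow a udev-maintained /dev/disk symlink to the backing device."""
    link = Path("/dev/disk") / kind / name
    return link.resolve() if link.exists() else None

# The devices this boot waited for, expressed as lookups:
for kind, name in [("by-label", "EFI-SYSTEM"),
                   ("by-label", "ROOT"),
                   ("by-partlabel", "USR-A"),
                   ("by-partuuid", "7130c94a-213a-4e5a-8e26-6cce9662f132")]:
    print(f"{kind}/{name} -> {resolve(kind, name)}")
```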
May 13 23:57:06.146752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:57:06.150764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:57:06.266764 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 13 23:57:06.266834 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 23:57:06.267757 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 13 23:57:06.267823 kernel: ata3.00: applying bridge limits May 13 23:57:06.268841 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 13 23:57:06.269749 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 13 23:57:06.269766 kernel: ata3.00: configured for UDMA/100 May 13 23:57:06.270747 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 23:57:06.271737 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 23:57:06.273741 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 13 23:57:06.321746 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 13 23:57:06.321991 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 23:57:06.335790 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 23:57:07.152549 disk-uuid[571]: The operation has completed successfully. May 13 23:57:07.153981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:57:07.190495 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:57:07.190629 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:57:07.231888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:57:07.238343 sh[594]: Success May 13 23:57:07.251748 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 13 23:57:07.290941 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:57:07.304818 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:57:07.308172 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:57:07.325417 kernel: BTRFS info (device dm-0): first mount of filesystem 87997324-54dc-4f74-bc1a-3f18f5f2e9f7 May 13 23:57:07.325455 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 23:57:07.325467 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:57:07.326741 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:57:07.328389 kernel: BTRFS info (device dm-0): using free space tree May 13 23:57:07.332691 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:57:07.333600 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:57:07.348038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:57:07.350443 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
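verity-setup.service above is what turns the USR-A partition into the integrity-checked /dev/mapper/usr: dm-verity refuses to return any block that does not hash up to the `verity.usrhash=` root hash from the command line. Flatcar's initrd performs this itself; purely as an illustration of the shape of the operation (the device arguments and offsets below are image-specific assumptions, not taken from this log):

```python
import subprocess

ROOT_HASH = "9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51"  # verity.usrhash=
USR_PART = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"         # verity.usr=

# veritysetup open <data_device> <name> <hash_device> <root_hash>
# When data and hash tree share one partition, a real invocation also needs
# the image-specific --hash-offset; omitted here since it is not in the log.
subprocess.run(["veritysetup", "open", USR_PART, "usr", USR_PART, ROOT_HASH],
               check=True)
# On success the verified, read-only view appears as /dev/mapper/usr, which is
# what mount.usr=/dev/mapper/usr on the command line then mounts.
```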
May 13 23:57:07.370230 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:57:07.370265 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:57:07.370283 kernel: BTRFS info (device vda6): using free space tree May 13 23:57:07.374749 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:57:07.380751 kernel: BTRFS info (device vda6): last unmount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:57:07.385480 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:57:07.392144 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:57:07.546956 ignition[682]: Ignition 2.20.0 May 13 23:57:07.546971 ignition[682]: Stage: fetch-offline May 13 23:57:07.547771 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:57:07.547018 ignition[682]: no configs at "/usr/lib/ignition/base.d" May 13 23:57:07.547032 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:57:07.547172 ignition[682]: parsed url from cmdline: "" May 13 23:57:07.547176 ignition[682]: no config URL provided May 13 23:57:07.547182 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:57:07.547193 ignition[682]: no config at "/usr/lib/ignition/user.ign" May 13 23:57:07.555908 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:57:07.547221 ignition[682]: op(1): [started] loading QEMU firmware config module May 13 23:57:07.547226 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 23:57:07.559558 ignition[682]: op(1): [finished] loading QEMU firmware config module May 13 23:57:07.598595 ignition[682]: parsing config with SHA512: dcbcd56cd27b88e26421fd16f5a5b3b466f4aa62fb44443b1bfc77b76bb19b0b79783f71ab7626d66f70a4a52760180a2b04e03534dbffb513834a1ca50769d8 May 13 23:57:07.605180 unknown[682]: fetched base config from "system" May 13 23:57:07.605712 ignition[682]: fetch-offline: fetch-offline passed May 13 23:57:07.605193 unknown[682]: fetched user config from "qemu" May 13 23:57:07.605838 ignition[682]: Ignition finished successfully May 13 23:57:07.611454 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:57:07.620350 systemd-networkd[793]: lo: Link UP May 13 23:57:07.620361 systemd-networkd[793]: lo: Gained carrier May 13 23:57:07.622282 systemd-networkd[793]: Enumeration completed May 13 23:57:07.622664 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:57:07.622668 systemd-networkd[793]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:57:07.623603 systemd-networkd[793]: eth0: Link UP May 13 23:57:07.623608 systemd-networkd[793]: eth0: Gained carrier May 13 23:57:07.623615 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:57:07.626643 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:57:07.632029 systemd[1]: Reached target network.target - Network. May 13 23:57:07.633734 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
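In the fetch-offline stage above, `op(1): executing: "modprobe" "qemu_fw_cfg"` is Ignition loading the driver for QEMU's firmware-config interface, pulling the user config from the hypervisor, and logging its SHA512. Assuming the fw_cfg key Ignition conventionally uses on QEMU (`opt/com.coreos/config`, an assumption worth verifying), the same blob can be read back and hashed from sysfs as root:

```python
import hashlib
from pathlib import Path

# Exposed by the qemu_fw_cfg module loaded above; the key name is an
# assumption based on Ignition's QEMU provider, not taken from this log.
CFG = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

blob = CFG.read_bytes()   # requires root
digest = hashlib.sha512(blob).hexdigest()
# Compare with the "parsing config with SHA512: dcbc..." line above.
print(digest[:16], "...", len(blob), "bytes")
```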
May 13 23:57:07.641779 systemd-networkd[793]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:57:07.641859 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:57:07.689215 ignition[797]: Ignition 2.20.0 May 13 23:57:07.689228 ignition[797]: Stage: kargs May 13 23:57:07.689391 ignition[797]: no configs at "/usr/lib/ignition/base.d" May 13 23:57:07.689405 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:57:07.693436 ignition[797]: kargs: kargs passed May 13 23:57:07.693488 ignition[797]: Ignition finished successfully May 13 23:57:07.697887 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:57:07.709876 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 23:57:07.731976 ignition[806]: Ignition 2.20.0 May 13 23:57:07.731989 ignition[806]: Stage: disks May 13 23:57:07.732156 ignition[806]: no configs at "/usr/lib/ignition/base.d" May 13 23:57:07.732170 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:57:07.736023 ignition[806]: disks: disks passed May 13 23:57:07.736077 ignition[806]: Ignition finished successfully May 13 23:57:07.739028 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:57:07.741289 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:57:07.743491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:57:07.745945 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:57:07.747945 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:57:07.749988 systemd[1]: Reached target basic.target - Basic System. May 13 23:57:07.762941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:57:07.782892 systemd-fsck[816]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:57:07.789469 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:57:07.801892 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:57:07.888862 kernel: EXT4-fs (vda9): mounted filesystem cf173df9-f79a-4e29-be52-c2936b0d4e57 r/w with ordered data mode. Quota mode: none. May 13 23:57:07.889521 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:57:07.891084 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:57:07.901857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:57:07.903944 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:57:07.905244 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 23:57:07.911203 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (824) May 13 23:57:07.911238 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:57:07.905289 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
May 13 23:57:07.918329 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:57:07.918350 kernel: BTRFS info (device vda6): using free space tree May 13 23:57:07.918362 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:57:07.905317 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:57:07.912374 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:57:07.920064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:57:07.923880 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:57:07.961252 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:57:07.966573 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory May 13 23:57:07.970359 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:57:07.974128 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:57:08.064909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:57:08.077805 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:57:08.079496 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:57:08.086740 kernel: BTRFS info (device vda6): last unmount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:57:08.107988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:57:08.114606 ignition[937]: INFO : Ignition 2.20.0 May 13 23:57:08.114606 ignition[937]: INFO : Stage: mount May 13 23:57:08.116363 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:57:08.116363 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:57:08.118972 ignition[937]: INFO : mount: mount passed May 13 23:57:08.119697 ignition[937]: INFO : Ignition finished successfully May 13 23:57:08.122041 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:57:08.130832 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:57:08.324523 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:57:08.338934 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:57:08.363741 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (950) May 13 23:57:08.363784 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:57:08.365028 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:57:08.365042 kernel: BTRFS info (device vda6): using free space tree May 13 23:57:08.367756 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:57:08.369447 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:57:08.422451 ignition[967]: INFO : Ignition 2.20.0 May 13 23:57:08.422451 ignition[967]: INFO : Stage: files May 13 23:57:08.424396 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:57:08.424396 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:57:08.424396 ignition[967]: DEBUG : files: compiled without relabeling support, skipping May 13 23:57:08.427837 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:57:08.427837 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:57:08.431465 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:57:08.432878 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:57:08.432878 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:57:08.432033 unknown[967]: wrote ssh authorized keys file for user: core May 13 23:57:08.436716 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:57:08.436716 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 23:57:08.525963 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:57:09.105193 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:57:09.107315 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 23:57:09.107315 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 13 23:57:09.291970 systemd-networkd[793]: eth0: Gained IPv6LL May 13 23:57:09.477233 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 23:57:09.574066 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 23:57:09.575906 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 23:57:09.575906 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:57:09.575906 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:57:09.575906 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:57:09.575906 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:57:09.575906 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:57:09.575906 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:57:09.575906 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 13 23:57:09.589493 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:57:09.589493 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:57:09.589493 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:57:09.589493 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:57:09.589493 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:57:09.589493 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 13 23:57:09.874820 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 23:57:10.546205 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:57:10.546205 ignition[967]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 13 23:57:10.550651 ignition[967]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:57:10.550651 ignition[967]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:57:10.550651 ignition[967]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 13 23:57:10.550651 ignition[967]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 13 23:57:10.550651 ignition[967]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:57:10.550651 ignition[967]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:57:10.550651 ignition[967]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 13 23:57:10.550651 ignition[967]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 13 23:57:10.570245 ignition[967]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:57:10.574917 ignition[967]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:57:10.576527 ignition[967]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 13 23:57:10.576527 ignition[967]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 13 23:57:10.576527 ignition[967]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:57:10.576527 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:57:10.576527 
ignition[967]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:57:10.576527 ignition[967]: INFO : files: files passed May 13 23:57:10.576527 ignition[967]: INFO : Ignition finished successfully May 13 23:57:10.578034 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:57:10.589837 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:57:10.591734 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:57:10.593437 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:57:10.593544 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:57:10.602108 initrd-setup-root-after-ignition[996]: grep: /sysroot/oem/oem-release: No such file or directory May 13 23:57:10.605178 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:10.605178 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:10.608599 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:10.611912 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:57:10.614831 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:57:10.620888 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:57:10.645104 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:57:10.645248 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:57:10.647935 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:57:10.649733 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:57:10.651829 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:57:10.655846 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:57:10.670911 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:57:10.678919 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:57:10.689337 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:57:10.690621 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:57:10.692871 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:57:10.694869 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:57:10.694990 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:57:10.697102 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:57:10.698828 systemd[1]: Stopped target basic.target - Basic System. May 13 23:57:10.700857 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:57:10.702885 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:57:10.704169 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:57:10.706100 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
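The files stage that just finished downloaded tarballs and a sysext image, wrote manifests into /home/core, created the /etc/extensions/kubernetes.raw symlink, and installed unit presets. A hedged sketch of roughly what the supplied Ignition config could have looked like to produce those operations; the shape follows the Ignition v3 JSON schema, but the spec version and every field not visible in the log (unit bodies, modes, checksums) are illustrative only:

    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed; the log only names Ignition 2.20.0
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source":
                     "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
                 "contents": {"source":
                     "https://github.com/flatcar/sysext-bakery/releases/"
                     "download/latest/kubernetes-v1.31.0-x86-64.raw"}},
            ],
            "links": [
                # op(a) above: activates the sysext on the deployed system
                {"path": "/etc/extensions/kubernetes.raw", "hard": False,
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "..."},  # unit body is not shown in the log
                {"name": "coreos-metadata.service", "enabled": False},
            ],
        },
    }
    print(json.dumps(config, indent=2))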
May 13 23:57:10.708169 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:57:10.710479 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:57:10.712551 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:57:10.714523 systemd[1]: Stopped target swap.target - Swaps. May 13 23:57:10.716513 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:57:10.716632 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:57:10.718809 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:57:10.720576 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:57:10.722482 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:57:10.722580 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:57:10.724654 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:57:10.724788 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:57:10.726952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:57:10.727064 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:57:10.729009 systemd[1]: Stopped target paths.target - Path Units. May 13 23:57:10.730939 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:57:10.735832 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:57:10.737554 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:57:10.739475 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:57:10.741331 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:57:10.741442 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:57:10.743364 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:57:10.743458 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:57:10.745898 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:57:10.746056 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:57:10.747983 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:57:10.748096 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:57:10.757889 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:57:10.759827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:57:10.761028 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:57:10.761153 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:57:10.763476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:57:10.763626 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:57:10.770207 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:57:10.770325 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 13 23:57:10.774252 ignition[1022]: INFO : Ignition 2.20.0 May 13 23:57:10.774252 ignition[1022]: INFO : Stage: umount May 13 23:57:10.776020 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:57:10.776020 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:57:10.779277 ignition[1022]: INFO : umount: umount passed May 13 23:57:10.780264 ignition[1022]: INFO : Ignition finished successfully May 13 23:57:10.781514 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:57:10.781644 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:57:10.783804 systemd[1]: Stopped target network.target - Network. May 13 23:57:10.785565 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:57:10.785622 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:57:10.787448 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:57:10.787499 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:57:10.789624 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:57:10.789685 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:57:10.791567 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:57:10.791616 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:57:10.794163 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:57:10.796364 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:57:10.799462 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:57:10.803866 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:57:10.804000 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:57:10.809612 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:57:10.809959 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:57:10.810107 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:57:10.813062 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:57:10.813914 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:57:10.813987 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:57:10.821923 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:57:10.823904 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:57:10.823982 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:57:10.826415 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:57:10.826469 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:57:10.828967 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:57:10.829018 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:57:10.830949 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:57:10.831001 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:57:10.833384 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:57:10.836478 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
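The run-credentials-*.mount names above use systemd's unit-name escaping: '/' in a mount path becomes '-', and a literal '-' inside a path component is escaped as \x2d. A sketch reversing that for one of the units just deactivated; the replacement order matters, since the escape sequence itself contains no '-':

    unit = r"run-credentials-systemd\x2dnetworkd.service.mount"
    stem = unit.removesuffix(".mount")
    path = "/" + stem.replace("-", "/").replace(r"\x2d", "-")
    print(path)  # /run/credentials/systemd-networkd.service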
May 13 23:57:10.836548 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:10.844496 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:57:10.844635 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:57:10.862890 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:57:10.863077 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:57:10.865466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:57:10.865521 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:57:10.867613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:57:10.867655 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:57:10.869621 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:57:10.869683 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:57:10.871798 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:57:10.871849 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:57:10.873949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:57:10.874010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:57:10.891964 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:57:10.894268 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:57:10.894344 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:57:10.898030 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 23:57:10.899185 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:57:10.901842 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:57:10.902863 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:57:10.905265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:57:10.905327 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:10.909574 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:57:10.911011 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:10.912736 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:57:10.913883 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:57:10.982376 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:57:10.983477 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:57:10.985674 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:57:10.987749 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:57:10.987811 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:57:11.007873 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:57:11.018152 systemd[1]: Switching root. May 13 23:57:11.051006 systemd-journald[194]: Journal stopped May 13 23:57:12.314883 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
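Between the old journald's "Journal stopped" and the new instance taking over, the timestamps jump by about 1.26 seconds; that gap is the root switch itself. A sketch for parsing these short journal timestamps and measuring such gaps (the year is an assumption, since this timestamp format omits it):

    from datetime import datetime

    def ts(s, year=2025):  # year assumed: the short journal format drops it
        return datetime.strptime(f"{year} {s}", "%Y %b %d %H:%M:%S.%f")

    stopped = ts("May 13 23:57:11.051006")  # old journal's "Journal stopped"
    restart = ts("May 13 23:57:12.314883")  # SIGTERM receipt, in the new journal
    print(f"{(restart - stopped).total_seconds():.2f}s")  # ~1.26s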
May 13 23:57:12.314960 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:57:12.314982 kernel: SELinux: policy capability open_perms=1 May 13 23:57:12.314994 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:57:12.315006 kernel: SELinux: policy capability always_check_network=0 May 13 23:57:12.315018 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:57:12.315030 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:57:12.315046 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:57:12.315058 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:57:12.315070 kernel: audit: type=1403 audit(1747180631.403:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:57:12.315091 systemd[1]: Successfully loaded SELinux policy in 40.111ms. May 13 23:57:12.315120 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.931ms. May 13 23:57:12.315134 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:57:12.315147 systemd[1]: Detected virtualization kvm. May 13 23:57:12.315159 systemd[1]: Detected architecture x86-64. May 13 23:57:12.315172 systemd[1]: Detected first boot. May 13 23:57:12.315187 systemd[1]: Initializing machine ID from VM UUID. May 13 23:57:12.315199 zram_generator::config[1067]: No configuration found. May 13 23:57:12.315213 kernel: Guest personality initialized and is inactive May 13 23:57:12.315225 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 23:57:12.315237 kernel: Initialized host personality May 13 23:57:12.315249 kernel: NET: Registered PF_VSOCK protocol family May 13 23:57:12.315261 systemd[1]: Populated /etc with preset unit settings. May 13 23:57:12.315274 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:57:12.315287 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:57:12.315302 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:57:12.315315 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:57:12.315329 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:57:12.315342 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:57:12.315354 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:57:12.315367 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:57:12.315380 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:57:12.315394 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:57:12.315416 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:57:12.315429 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:57:12.315441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:57:12.315454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
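The long "+PAM +AUDIT ... +LIBARCHIVE" string above is systemd 256.8's compile-time feature list, '+' for built in and '-' for compiled out. A minimal sketch splitting it into the two sets, with the string copied verbatim from the log:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 "
                "-IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS "
                "+LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 "
                "+LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP "
                "-SYSVINIT +LIBARCHIVE")
    enabled = sorted(f[1:] for f in features.split() if f[0] == "+")
    disabled = sorted(f[1:] for f in features.split() if f[0] == "-")
    # Note -APPARMOR alongside +SELINUX: consistent with the SELinux policy
    # load reported just above.
    print(f"{len(enabled)} built in, {len(disabled)} compiled out")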
May 13 23:57:12.315467 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:57:12.315479 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:57:12.315492 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:57:12.315506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:57:12.315522 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 23:57:12.315535 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:57:12.315548 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:57:12.315560 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:57:12.315573 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:57:12.315586 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:57:12.315599 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:57:12.315620 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:57:12.315634 systemd[1]: Reached target slices.target - Slice Units. May 13 23:57:12.315650 systemd[1]: Reached target swap.target - Swaps. May 13 23:57:12.315662 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:57:12.315675 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:57:12.315689 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:57:12.315701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:57:12.315714 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:57:12.315739 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:57:12.315752 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:57:12.315765 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:57:12.315781 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:57:12.315794 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:57:12.315807 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:12.315819 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:57:12.315832 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:57:12.315844 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:57:12.315857 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:57:12.315870 systemd[1]: Reached target machines.target - Containers. May 13 23:57:12.315885 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:57:12.315898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:57:12.315911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
May 13 23:57:12.315924 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:57:12.315936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:57:12.315949 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:57:12.315961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:57:12.315976 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:57:12.315988 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:57:12.316003 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:57:12.316016 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:57:12.316034 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:57:12.316047 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:57:12.316059 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:57:12.316073 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:57:12.316086 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:57:12.316099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:57:12.316114 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:57:12.316126 kernel: fuse: init (API version 7.39) May 13 23:57:12.316159 systemd-journald[1131]: Collecting audit messages is disabled. May 13 23:57:12.316183 systemd-journald[1131]: Journal started May 13 23:57:12.316206 systemd-journald[1131]: Runtime Journal (/run/log/journal/c9f0dd6339154932b58a52b60b0462b3) is 6M, max 48.4M, 42.3M free. May 13 23:57:12.071441 systemd[1]: Queued start job for default target multi-user.target. May 13 23:57:12.085786 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:57:12.086307 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:57:12.318757 kernel: loop: module loaded May 13 23:57:12.327987 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:57:12.333374 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:57:12.347832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:57:12.347901 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:57:12.347918 systemd[1]: Stopped verity-setup.service. May 13 23:57:12.347940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:12.352167 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:57:12.353010 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:57:12.360229 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:57:12.363966 kernel: ACPI: bus type drm_connector registered May 13 23:57:12.361589 systemd[1]: Mounted media.mount - External Media Directory. 
May 13 23:57:12.363372 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:57:12.364687 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:57:12.365961 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:57:12.367254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:57:12.368944 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:57:12.369178 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:57:12.370742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:57:12.370972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:57:12.372572 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:57:12.372830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:57:12.374457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:57:12.374690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:57:12.376259 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:57:12.376475 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:57:12.378004 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:57:12.378227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:57:12.379697 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:57:12.381370 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:57:12.383156 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:57:12.384838 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:57:12.476093 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:57:12.492826 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:57:12.510812 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:57:12.513391 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:57:12.514557 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:57:12.514590 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:57:12.516708 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:57:12.519204 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:57:12.521546 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:57:12.522798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:57:12.524071 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:57:12.526992 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:57:12.528306 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:57:12.529692 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
May 13 23:57:12.530986 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:57:12.535022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:57:12.538103 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:57:12.545926 systemd-journald[1131]: Time spent on flushing to /var/log/journal/c9f0dd6339154932b58a52b60b0462b3 is 24.453ms for 969 entries. May 13 23:57:12.545926 systemd-journald[1131]: System Journal (/var/log/journal/c9f0dd6339154932b58a52b60b0462b3) is 8M, max 195.6M, 187.6M free. May 13 23:57:12.590774 systemd-journald[1131]: Received client request to flush runtime journal. May 13 23:57:12.590835 kernel: loop0: detected capacity change from 0 to 205544 May 13 23:57:12.543239 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:57:12.547051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:57:12.548659 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:57:12.550185 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:57:12.552467 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:57:12.556418 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:57:12.566551 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:57:12.580874 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:57:12.593933 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:57:12.597402 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:57:12.601873 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:57:12.611187 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:57:12.614968 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:57:12.615492 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 13 23:57:12.615514 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 13 23:57:12.622694 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:57:12.630952 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:57:12.633126 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:57:12.634033 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:57:12.639929 kernel: loop1: detected capacity change from 0 to 147912 May 13 23:57:12.666376 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:57:12.679040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:57:12.683793 kernel: loop2: detected capacity change from 0 to 138176 May 13 23:57:12.719391 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. May 13 23:57:12.719414 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. May 13 23:57:12.725353 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
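The flush report above, 24.453 ms spent writing 969 entries to the persistent journal, averages out to roughly 25 microseconds per entry. The arithmetic, using only the figures from the log:

    entries, flush_ms = 969, 24.453
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~25.2 us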
May 13 23:57:12.740755 kernel: loop3: detected capacity change from 0 to 205544 May 13 23:57:12.756792 kernel: loop4: detected capacity change from 0 to 147912 May 13 23:57:12.772751 kernel: loop5: detected capacity change from 0 to 138176 May 13 23:57:12.802918 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 23:57:12.803629 (sd-merge)[1214]: Merged extensions into '/usr'. May 13 23:57:12.808291 systemd[1]: Reload requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:57:12.808305 systemd[1]: Reloading... May 13 23:57:12.871755 zram_generator::config[1241]: No configuration found. May 13 23:57:13.016938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:13.021372 ldconfig[1182]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:57:13.082802 systemd[1]: Reloading finished in 273 ms. May 13 23:57:13.225512 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:57:13.227498 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:57:13.247600 systemd[1]: Starting ensure-sysext.service... May 13 23:57:13.249850 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:57:13.263218 systemd[1]: Reload requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)... May 13 23:57:13.263238 systemd[1]: Reloading... May 13 23:57:13.285738 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:57:13.286383 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:57:13.287445 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:57:13.287755 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. May 13 23:57:13.287836 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. May 13 23:57:13.291995 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:57:13.292007 systemd-tmpfiles[1280]: Skipping /boot May 13 23:57:13.317544 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:57:13.317704 systemd-tmpfiles[1280]: Skipping /boot May 13 23:57:13.345991 zram_generator::config[1309]: No configuration found. May 13 23:57:13.465259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:13.558884 systemd[1]: Reloading finished in 295 ms. May 13 23:57:13.575422 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:57:13.594763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:57:13.614124 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:57:13.617002 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:57:13.619562 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
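Three distinct loop-device capacities each appear twice above (205544, 147912, 138176), matching the three extension images sd-merge names: containerd-flatcar, docker-flatcar and kubernetes. Assuming the kernel reports loop capacity in 512-byte sectors, the usual unit for these messages, the image sizes come out near 100, 72 and 67 MiB:

    # Capacities from the "loopN: detected capacity change" lines above;
    # the 512-byte-sector unit is an assumption about the message format.
    for sectors in (205544, 147912, 138176):
        print(f"{sectors} sectors ~ {sectors * 512 / 2**20:.1f} MiB")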
May 13 23:57:13.624587 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:57:13.627785 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:57:13.632646 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:57:13.636860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:13.637046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:57:13.638910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:57:13.643709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:57:13.646963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:57:13.648267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:57:13.648440 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:57:13.650817 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:57:13.651913 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:13.653036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:57:13.653560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:57:13.658473 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:57:13.658702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:57:13.660997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:57:13.661331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:57:13.670544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:13.671023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:57:13.681335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:57:13.684934 systemd-udevd[1358]: Using default interface naming scheme 'v255'. May 13 23:57:13.686100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:57:13.688994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:57:13.691235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:57:13.691437 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:57:13.691570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:13.693409 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
May 13 23:57:13.694805 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:57:13.697446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:57:13.697554 augenrules[1383]: No rules May 13 23:57:13.697744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:57:13.700921 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:57:13.701195 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:57:13.702975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:57:13.703219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:57:13.705064 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:57:13.705297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:57:13.711271 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:57:13.715008 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:57:13.720284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:13.733072 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:57:13.734283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:57:13.739850 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:57:13.744134 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:57:13.759099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:57:13.761597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:57:13.762858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:57:13.762972 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:57:13.764971 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:57:13.766061 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:57:13.766171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:13.772217 augenrules[1399]: /sbin/augenrules: No change May 13 23:57:13.770851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:57:13.773336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:57:13.775395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:57:13.783623 systemd[1]: Finished ensure-sysext.service. May 13 23:57:13.787302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:57:13.787578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 13 23:57:13.801817 augenrules[1443]: No rules May 13 23:57:13.804911 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:57:13.806006 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:57:13.818978 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:57:13.821026 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:57:13.821782 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:57:13.823291 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:57:13.823635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:57:13.826984 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1436) May 13 23:57:13.826478 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:57:13.828786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:57:13.830354 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:57:13.831926 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 23:57:13.850531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:57:13.873893 systemd-resolved[1357]: Positive Trust Anchors: May 13 23:57:13.874259 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:57:13.874335 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:57:13.880783 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 13 23:57:13.882521 systemd-resolved[1357]: Defaulting to hostname 'linux'. May 13 23:57:13.886037 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:57:13.887455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:57:13.901745 kernel: ACPI: button: Power Button [PWRF] May 13 23:57:13.914703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:57:13.927945 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:57:13.937147 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 13 23:57:13.944752 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 23:57:13.953023 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:57:13.960350 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:57:13.961973 systemd[1]: Reached target time-set.target - System Time Set. 
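The positive trust anchor resolved loads above is the DNS root zone's KSK-2017 DS record: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), then the 32-byte key digest. A small sketch pulling those fields apart:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, klass, rrtype, key_tag, alg, digest_type, digest = ds.split()
    print(f"owner={owner} tag={key_tag} alg={alg} (RSA/SHA-256) "
          f"digest_type={digest_type} (SHA-256)")
    print(f"digest: {len(digest) // 2} bytes")  # 32 bytes, as SHA-256 requires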
May 13 23:57:13.969891 systemd-networkd[1450]: lo: Link UP May 13 23:57:13.973962 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 23:57:13.975061 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 23:57:13.970046 systemd-networkd[1450]: lo: Gained carrier May 13 23:57:13.971957 systemd-networkd[1450]: Enumeration completed May 13 23:57:13.972953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:57:13.974254 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:57:13.975580 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:57:13.975586 systemd-networkd[1450]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:57:13.975640 systemd[1]: Reached target network.target - Network. May 13 23:57:13.979615 systemd-networkd[1450]: eth0: Link UP May 13 23:57:13.979627 systemd-networkd[1450]: eth0: Gained carrier May 13 23:57:13.979641 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:57:13.988257 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:57:13.994462 systemd-networkd[1450]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:57:13.999488 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. May 13 23:57:14.003601 systemd-timesyncd[1451]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 23:57:14.003656 systemd-timesyncd[1451]: Initial clock synchronization to Tue 2025-05-13 23:57:13.968177 UTC. May 13 23:57:14.004161 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:57:14.031367 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:57:14.034758 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:57:14.063760 kernel: kvm_amd: TSC scaling supported May 13 23:57:14.063840 kernel: kvm_amd: Nested Virtualization enabled May 13 23:57:14.063881 kernel: kvm_amd: Nested Paging enabled May 13 23:57:14.063895 kernel: kvm_amd: LBR virtualization supported May 13 23:57:14.063915 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 13 23:57:14.063927 kernel: kvm_amd: Virtual GIF supported May 13 23:57:14.086758 kernel: EDAC MC: Ver: 3.0.0 May 13 23:57:14.105296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:14.115267 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:57:14.127178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:57:14.136804 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:57:14.167364 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:57:14.169001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:57:14.170134 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:57:14.171317 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
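networkd reacquires the same lease the initrd held: 10.0.0.59/16 from 10.0.0.1, which doubles as the NTP server timesyncd then contacts on port 123. A sketch unpacking the logged lease with the standard-library ipaddress module, taking the log line as its only input:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.59/16")  # from the DHCPv4 line above
    gateway = ipaddress.ip_address("10.0.0.1")
    print(iface.network)             # 10.0.0.0/16
    print(gateway in iface.network)  # True: the gateway is on-link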
May 13 23:57:14.172590 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:57:14.174073 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:57:14.175300 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:57:14.176588 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:57:14.177914 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:57:14.177952 systemd[1]: Reached target paths.target - Path Units. May 13 23:57:14.178916 systemd[1]: Reached target timers.target - Timer Units. May 13 23:57:14.180940 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:57:14.183819 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:57:14.187589 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:57:14.189058 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:57:14.190321 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:57:14.194588 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:57:14.196431 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:57:14.198980 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:57:14.200693 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:57:14.201878 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:57:14.202855 systemd[1]: Reached target basic.target - Basic System. May 13 23:57:14.203841 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:57:14.203875 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:57:14.205004 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:57:14.207127 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:57:14.211043 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:57:14.215258 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:57:14.216825 lvm[1487]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:57:14.216404 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:57:14.218532 jq[1490]: false May 13 23:57:14.220386 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:57:14.225174 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:57:14.229911 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:57:14.234626 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:57:14.238772 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 13 23:57:14.238840 extend-filesystems[1491]: Found loop3 May 13 23:57:14.238840 extend-filesystems[1491]: Found loop4 May 13 23:57:14.238840 extend-filesystems[1491]: Found loop5 May 13 23:57:14.242819 extend-filesystems[1491]: Found sr0 May 13 23:57:14.242819 extend-filesystems[1491]: Found vda May 13 23:57:14.242819 extend-filesystems[1491]: Found vda1 May 13 23:57:14.242819 extend-filesystems[1491]: Found vda2 May 13 23:57:14.242819 extend-filesystems[1491]: Found vda3 May 13 23:57:14.242819 extend-filesystems[1491]: Found usr May 13 23:57:14.242819 extend-filesystems[1491]: Found vda4 May 13 23:57:14.242819 extend-filesystems[1491]: Found vda6 May 13 23:57:14.242819 extend-filesystems[1491]: Found vda7 May 13 23:57:14.242819 extend-filesystems[1491]: Found vda9 May 13 23:57:14.242819 extend-filesystems[1491]: Checking size of /dev/vda9 May 13 23:57:14.246183 dbus-daemon[1489]: [system] SELinux support is enabled May 13 23:57:14.244009 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:57:14.260836 extend-filesystems[1491]: Resized partition /dev/vda9 May 13 23:57:14.244479 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:57:14.261823 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024) May 13 23:57:14.249493 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:57:14.266106 jq[1509]: true May 13 23:57:14.259152 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:57:14.263565 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:57:14.269096 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1408) May 13 23:57:14.268767 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:57:14.270741 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:57:14.278369 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:57:14.278673 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:57:14.279071 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:57:14.279335 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:57:14.284333 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:57:14.284634 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:57:14.296880 jq[1516]: true May 13 23:57:14.300750 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:57:14.306788 update_engine[1505]: I20250513 23:57:14.306638 1505 main.cc:92] Flatcar Update Engine starting May 13 23:57:14.323803 update_engine[1505]: I20250513 23:57:14.312865 1505 update_check_scheduler.cc:74] Next update check in 7m40s May 13 23:57:14.314468 (ntainerd)[1517]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:57:14.327251 extend-filesystems[1511]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:57:14.327251 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:57:14.327251 extend-filesystems[1511]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
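Here extend-filesystems grows the root ext4 on /dev/vda9 online with resize2fs 1.47.1, from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB) while / stays mounted. A minimal manual equivalent, assuming the underlying partition has already been enlarged (e2fsprogs only; growing the partition itself is a separate step):

```bash
# Grow a mounted ext4 filesystem to fill its partition (online resize)
sudo resize2fs /dev/vda9

# Verify the new block count and mounted size
sudo dumpe2fs -h /dev/vda9 | grep -i 'block count'
df -h /
```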
May 13 23:57:14.331771 extend-filesystems[1491]: Resized filesystem in /dev/vda9 May 13 23:57:14.333795 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:57:14.334098 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:57:14.338614 tar[1515]: linux-amd64/helm May 13 23:57:14.339904 systemd[1]: Started update-engine.service - Update Engine. May 13 23:57:14.342706 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:57:14.343107 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:57:14.345285 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:57:14.345310 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:57:14.355903 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:57:14.363126 bash[1544]: Updated "/home/core/.ssh/authorized_keys" May 13 23:57:14.364567 systemd-logind[1499]: Watching system buttons on /dev/input/event1 (Power Button) May 13 23:57:14.364593 systemd-logind[1499]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:57:14.365632 systemd-logind[1499]: New seat seat0. May 13 23:57:14.366456 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:57:14.371497 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:57:14.373858 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:57:14.388649 locksmithd[1545]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:57:14.506180 containerd[1517]: time="2025-05-13T23:57:14.506005016Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 13 23:57:14.528687 containerd[1517]: time="2025-05-13T23:57:14.528642547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 23:57:14.530541 containerd[1517]: time="2025-05-13T23:57:14.530486095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 23:57:14.530541 containerd[1517]: time="2025-05-13T23:57:14.530517093Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 23:57:14.530541 containerd[1517]: time="2025-05-13T23:57:14.530543343Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 23:57:14.530745 containerd[1517]: time="2025-05-13T23:57:14.530708743Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 23:57:14.530773 containerd[1517]: time="2025-05-13T23:57:14.530746223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 May 13 23:57:14.530844 containerd[1517]: time="2025-05-13T23:57:14.530824720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:57:14.530844 containerd[1517]: time="2025-05-13T23:57:14.530841362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 23:57:14.531122 containerd[1517]: time="2025-05-13T23:57:14.531091972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:57:14.531122 containerd[1517]: time="2025-05-13T23:57:14.531113081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 23:57:14.531165 containerd[1517]: time="2025-05-13T23:57:14.531126096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:57:14.531165 containerd[1517]: time="2025-05-13T23:57:14.531138108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 23:57:14.531259 containerd[1517]: time="2025-05-13T23:57:14.531234329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 23:57:14.531508 containerd[1517]: time="2025-05-13T23:57:14.531479949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 23:57:14.531688 containerd[1517]: time="2025-05-13T23:57:14.531662181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:57:14.531688 containerd[1517]: time="2025-05-13T23:57:14.531679574Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 23:57:14.531817 containerd[1517]: time="2025-05-13T23:57:14.531792255Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 23:57:14.531875 containerd[1517]: time="2025-05-13T23:57:14.531856105Z" level=info msg="metadata content store policy set" policy=shared May 13 23:57:14.537689 containerd[1517]: time="2025-05-13T23:57:14.537654030Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 23:57:14.537788 containerd[1517]: time="2025-05-13T23:57:14.537709484Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 23:57:14.537788 containerd[1517]: time="2025-05-13T23:57:14.537773454Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 23:57:14.537846 containerd[1517]: time="2025-05-13T23:57:14.537791688Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 23:57:14.537846 containerd[1517]: time="2025-05-13T23:57:14.537806997Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 13 23:57:14.537983 containerd[1517]: time="2025-05-13T23:57:14.537956458Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 23:57:14.538276 containerd[1517]: time="2025-05-13T23:57:14.538246231Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 23:57:14.538392 containerd[1517]: time="2025-05-13T23:57:14.538367178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 23:57:14.538392 containerd[1517]: time="2025-05-13T23:57:14.538388227Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 23:57:14.538432 containerd[1517]: time="2025-05-13T23:57:14.538403346Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 23:57:14.538432 containerd[1517]: time="2025-05-13T23:57:14.538417492Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 23:57:14.538432 containerd[1517]: time="2025-05-13T23:57:14.538430146Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 23:57:14.538493 containerd[1517]: time="2025-05-13T23:57:14.538447078Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 23:57:14.538493 containerd[1517]: time="2025-05-13T23:57:14.538461124Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 23:57:14.538493 containerd[1517]: time="2025-05-13T23:57:14.538475571Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 23:57:14.538493 containerd[1517]: time="2025-05-13T23:57:14.538489247Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 23:57:14.538578 containerd[1517]: time="2025-05-13T23:57:14.538502351Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 23:57:14.538578 containerd[1517]: time="2025-05-13T23:57:14.538514975Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 23:57:14.538578 containerd[1517]: time="2025-05-13T23:57:14.538545202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538578 containerd[1517]: time="2025-05-13T23:57:14.538559449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538578 containerd[1517]: time="2025-05-13T23:57:14.538572032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538676 containerd[1517]: time="2025-05-13T23:57:14.538584526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538676 containerd[1517]: time="2025-05-13T23:57:14.538597550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538676 containerd[1517]: time="2025-05-13T23:57:14.538610414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 13 23:57:14.538676 containerd[1517]: time="2025-05-13T23:57:14.538622206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538676 containerd[1517]: time="2025-05-13T23:57:14.538635040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538676 containerd[1517]: time="2025-05-13T23:57:14.538647764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538676 containerd[1517]: time="2025-05-13T23:57:14.538662752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538676 containerd[1517]: time="2025-05-13T23:57:14.538673613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538845 containerd[1517]: time="2025-05-13T23:57:14.538687328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538845 containerd[1517]: time="2025-05-13T23:57:14.538700243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538845 containerd[1517]: time="2025-05-13T23:57:14.538714459Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 23:57:14.538845 containerd[1517]: time="2025-05-13T23:57:14.538761117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538845 containerd[1517]: time="2025-05-13T23:57:14.538775263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 23:57:14.538845 containerd[1517]: time="2025-05-13T23:57:14.538785943Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 23:57:14.538845 containerd[1517]: time="2025-05-13T23:57:14.538828744Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 23:57:14.538845 containerd[1517]: time="2025-05-13T23:57:14.538844443Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 23:57:14.539048 containerd[1517]: time="2025-05-13T23:57:14.538855664Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 23:57:14.539048 containerd[1517]: time="2025-05-13T23:57:14.538876403Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 23:57:14.539048 containerd[1517]: time="2025-05-13T23:57:14.538886903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 23:57:14.539048 containerd[1517]: time="2025-05-13T23:57:14.538899817Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 23:57:14.539048 containerd[1517]: time="2025-05-13T23:57:14.538910397Z" level=info msg="NRI interface is disabled by configuration." May 13 23:57:14.539048 containerd[1517]: time="2025-05-13T23:57:14.538921949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 23:57:14.539259 containerd[1517]: time="2025-05-13T23:57:14.539206242Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 23:57:14.539406 containerd[1517]: time="2025-05-13T23:57:14.539274069Z" level=info msg="Connect containerd service" May 13 23:57:14.539406 containerd[1517]: time="2025-05-13T23:57:14.539305648Z" level=info msg="using legacy CRI server" May 13 23:57:14.539406 containerd[1517]: time="2025-05-13T23:57:14.539313894Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:57:14.539464 containerd[1517]: time="2025-05-13T23:57:14.539416777Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 23:57:14.540191 containerd[1517]: time="2025-05-13T23:57:14.540154190Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:57:14.540447 
containerd[1517]: time="2025-05-13T23:57:14.540406854Z" level=info msg="Start subscribing containerd event" May 13 23:57:14.540602 containerd[1517]: time="2025-05-13T23:57:14.540512772Z" level=info msg="Start recovering state" May 13 23:57:14.540602 containerd[1517]: time="2025-05-13T23:57:14.540553509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:57:14.540663 containerd[1517]: time="2025-05-13T23:57:14.540616697Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:57:14.540815 containerd[1517]: time="2025-05-13T23:57:14.540798378Z" level=info msg="Start event monitor" May 13 23:57:14.541113 containerd[1517]: time="2025-05-13T23:57:14.540869852Z" level=info msg="Start snapshots syncer" May 13 23:57:14.541113 containerd[1517]: time="2025-05-13T23:57:14.540883227Z" level=info msg="Start cni network conf syncer for default" May 13 23:57:14.541113 containerd[1517]: time="2025-05-13T23:57:14.540893677Z" level=info msg="Start streaming server" May 13 23:57:14.541113 containerd[1517]: time="2025-05-13T23:57:14.540984267Z" level=info msg="containerd successfully booted in 0.036909s" May 13 23:57:14.541080 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:57:14.707239 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:57:14.725234 tar[1515]: linux-amd64/LICENSE May 13 23:57:14.725295 tar[1515]: linux-amd64/README.md May 13 23:57:14.732404 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:57:14.736617 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:57:14.738005 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:57:14.749231 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:57:14.749499 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:57:14.752302 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:57:14.766015 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:57:14.768834 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:57:14.771001 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:57:14.772319 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:57:15.179995 systemd-networkd[1450]: eth0: Gained IPv6LL May 13 23:57:15.183584 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:57:15.185609 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:57:15.199982 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:57:15.202932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:15.205513 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:57:15.234781 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:57:15.236534 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:57:15.236838 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:57:15.239341 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:57:15.815416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:15.817115 systemd[1]: Reached target multi-user.target - Multi-User System. 
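containerd is now serving on /run/containerd/containerd.sock, but earlier its CRI plugin logged "no network config found in /etc/cni/net.d"; that error persists until a pod-network add-on (or a hand-placed conflist) provides CNI configuration. Purely as an illustrative sketch, a minimal bridge conflist of the kind the CRI plugin loads; the name and subnet are arbitrary examples, and the bridge/host-local/portmap binaries must exist under the /opt/cni/bin directory the config above names:

```bash
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
```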
May 13 23:57:15.818383 systemd[1]: Startup finished in 893ms (kernel) + 6.676s (initrd) + 4.454s (userspace) = 12.024s. May 13 23:57:15.819800 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:16.240823 kubelet[1602]: E0513 23:57:16.240619 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:16.245166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:16.245398 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:16.245807 systemd[1]: kubelet.service: Consumed 925ms CPU time, 237.2M memory peak. May 13 23:57:18.450343 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:57:18.451660 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:54770.service - OpenSSH per-connection server daemon (10.0.0.1:54770). May 13 23:57:18.515147 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 54770 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:57:18.517329 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:18.528599 systemd-logind[1499]: New session 1 of user core. May 13 23:57:18.530022 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:57:18.542941 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:57:18.557013 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:57:18.570025 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:57:18.573281 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:57:18.575845 systemd-logind[1499]: New session c1 of user core. May 13 23:57:18.728830 systemd[1619]: Queued start job for default target default.target. May 13 23:57:18.747134 systemd[1619]: Created slice app.slice - User Application Slice. May 13 23:57:18.747164 systemd[1619]: Reached target paths.target - Paths. May 13 23:57:18.747209 systemd[1619]: Reached target timers.target - Timers. May 13 23:57:18.748875 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:57:18.760376 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:57:18.760521 systemd[1619]: Reached target sockets.target - Sockets. May 13 23:57:18.760565 systemd[1619]: Reached target basic.target - Basic System. May 13 23:57:18.760609 systemd[1619]: Reached target default.target - Main User Target. May 13 23:57:18.760648 systemd[1619]: Startup finished in 175ms. May 13 23:57:18.761082 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:57:18.762805 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:57:18.829538 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:54774.service - OpenSSH per-connection server daemon (10.0.0.1:54774). 
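The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm init/join and simply does not exist yet, so the unit fails and systemd retries (the restart counters appear later in this log). For orientation only, a hand-written KubeletConfiguration of the rough shape kubeadm generates; the field values here are illustrative, not taken from this host:

```bash
sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd    # matches SystemdCgroup=true in containerd's runc options above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF
```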
May 13 23:57:18.874069 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 54774 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:57:18.875798 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:18.880371 systemd-logind[1499]: New session 2 of user core. May 13 23:57:18.893886 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:57:18.947133 sshd[1632]: Connection closed by 10.0.0.1 port 54774 May 13 23:57:18.947592 sshd-session[1630]: pam_unix(sshd:session): session closed for user core May 13 23:57:18.955283 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:54774.service: Deactivated successfully. May 13 23:57:18.957236 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:57:18.958733 systemd-logind[1499]: Session 2 logged out. Waiting for processes to exit. May 13 23:57:18.973990 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:54778.service - OpenSSH per-connection server daemon (10.0.0.1:54778). May 13 23:57:18.974880 systemd-logind[1499]: Removed session 2. May 13 23:57:19.015849 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 54778 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:57:19.017617 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:19.023148 systemd-logind[1499]: New session 3 of user core. May 13 23:57:19.046008 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:57:19.097386 sshd[1640]: Connection closed by 10.0.0.1 port 54778 May 13 23:57:19.097831 sshd-session[1637]: pam_unix(sshd:session): session closed for user core May 13 23:57:19.114466 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:54778.service: Deactivated successfully. May 13 23:57:19.116273 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:57:19.117959 systemd-logind[1499]: Session 3 logged out. Waiting for processes to exit. May 13 23:57:19.132963 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:54780.service - OpenSSH per-connection server daemon (10.0.0.1:54780). May 13 23:57:19.133926 systemd-logind[1499]: Removed session 3. May 13 23:57:19.172800 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 54780 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:57:19.174266 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:19.178608 systemd-logind[1499]: New session 4 of user core. May 13 23:57:19.186845 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:57:19.242323 sshd[1648]: Connection closed by 10.0.0.1 port 54780 May 13 23:57:19.242883 sshd-session[1645]: pam_unix(sshd:session): session closed for user core May 13 23:57:19.254520 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:54780.service: Deactivated successfully. May 13 23:57:19.256655 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:57:19.258259 systemd-logind[1499]: Session 4 logged out. Waiting for processes to exit. May 13 23:57:19.273175 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:54784.service - OpenSSH per-connection server daemon (10.0.0.1:54784). May 13 23:57:19.274288 systemd-logind[1499]: Removed session 4. 
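The accept/close cycles above come from systemd's per-connection SSH handling: each TCP connection gets its own sshd@N-<local>:22-<peer>:<port>.service, and after PAM a session-N.scope under user-500.slice. To watch the same churn live with stock logind/systemd tooling:

```bash
# Logind's view of active sessions
loginctl list-sessions

# Per-connection sshd units and session scopes
systemctl list-units 'sshd@*' 'session-*.scope' --no-pager
```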
May 13 23:57:19.311772 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 54784 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:57:19.313257 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:19.318038 systemd-logind[1499]: New session 5 of user core. May 13 23:57:19.327868 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:57:19.387802 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:57:19.388158 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:19.404034 sudo[1657]: pam_unix(sudo:session): session closed for user root May 13 23:57:19.405688 sshd[1656]: Connection closed by 10.0.0.1 port 54784 May 13 23:57:19.406130 sshd-session[1653]: pam_unix(sshd:session): session closed for user core May 13 23:57:19.425441 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:54784.service: Deactivated successfully. May 13 23:57:19.427978 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:57:19.430074 systemd-logind[1499]: Session 5 logged out. Waiting for processes to exit. May 13 23:57:19.439035 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:54794.service - OpenSSH per-connection server daemon (10.0.0.1:54794). May 13 23:57:19.440114 systemd-logind[1499]: Removed session 5. May 13 23:57:19.481457 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 54794 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:57:19.483351 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:19.488212 systemd-logind[1499]: New session 6 of user core. May 13 23:57:19.501895 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:57:19.558608 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:57:19.559059 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:19.563579 sudo[1667]: pam_unix(sudo:session): session closed for user root May 13 23:57:19.570956 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:57:19.571288 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:19.596056 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:57:19.630537 augenrules[1689]: No rules May 13 23:57:19.632544 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:57:19.632874 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:57:19.634047 sudo[1666]: pam_unix(sudo:session): session closed for user root May 13 23:57:19.635531 sshd[1665]: Connection closed by 10.0.0.1 port 54794 May 13 23:57:19.635902 sshd-session[1662]: pam_unix(sshd:session): session closed for user core May 13 23:57:19.644520 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:54794.service: Deactivated successfully. May 13 23:57:19.646481 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:57:19.647887 systemd-logind[1499]: Session 6 logged out. Waiting for processes to exit. May 13 23:57:19.662975 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:54808.service - OpenSSH per-connection server daemon (10.0.0.1:54808). May 13 23:57:19.664107 systemd-logind[1499]: Removed session 6. 
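Above, the core user deletes the default rule files under /etc/audit/rules.d and restarts audit-rules, after which augenrules reports "No rules", i.e. an empty ruleset was compiled and loaded. With the standard audit userspace, the compile/load/verify cycle looks like:

```bash
# Merge /etc/audit/rules.d/*.rules and load the result into the kernel
sudo augenrules --load

# Show the currently loaded rules ("No rules" when the set is empty)
sudo auditctl -l
```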
May 13 23:57:19.704403 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 54808 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:57:19.706020 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:19.710374 systemd-logind[1499]: New session 7 of user core. May 13 23:57:19.722841 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:57:19.776310 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:57:19.776665 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:20.064972 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:57:20.065138 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:57:20.330828 dockerd[1720]: time="2025-05-13T23:57:20.330638242Z" level=info msg="Starting up" May 13 23:57:20.801439 dockerd[1720]: time="2025-05-13T23:57:20.801289152Z" level=info msg="Loading containers: start." May 13 23:57:20.988759 kernel: Initializing XFRM netlink socket May 13 23:57:21.072154 systemd-networkd[1450]: docker0: Link UP May 13 23:57:21.115169 dockerd[1720]: time="2025-05-13T23:57:21.115128602Z" level=info msg="Loading containers: done." May 13 23:57:21.128793 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck318326843-merged.mount: Deactivated successfully. May 13 23:57:21.129524 dockerd[1720]: time="2025-05-13T23:57:21.129475114Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:57:21.129599 dockerd[1720]: time="2025-05-13T23:57:21.129579234Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 13 23:57:21.129735 dockerd[1720]: time="2025-05-13T23:57:21.129693705Z" level=info msg="Daemon has completed initialization" May 13 23:57:21.165963 dockerd[1720]: time="2025-05-13T23:57:21.165892746Z" level=info msg="API listen on /run/docker.sock" May 13 23:57:21.166157 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:57:21.816571 containerd[1517]: time="2025-05-13T23:57:21.816516400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 23:57:22.426407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248655916.mount: Deactivated successfully. 
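dockerd comes up on the overlay2 storage driver, warning that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, and then serves its API on /run/docker.sock. Two quick checks against such a daemon, using only the stock CLI and ss:

```bash
# Confirm the storage driver and daemon version the log reports
docker info --format '{{.Driver}} {{.ServerVersion}}'

# The unix API socket systemd handed to dockerd
sudo ss -lxp | grep docker.sock
```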
May 13 23:57:23.280795 containerd[1517]: time="2025-05-13T23:57:23.280708294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:23.281428 containerd[1517]: time="2025-05-13T23:57:23.281389609Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 13 23:57:23.282590 containerd[1517]: time="2025-05-13T23:57:23.282529713Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:23.285066 containerd[1517]: time="2025-05-13T23:57:23.285039173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:23.286191 containerd[1517]: time="2025-05-13T23:57:23.286140357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.46958554s" May 13 23:57:23.286191 containerd[1517]: time="2025-05-13T23:57:23.286179236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 13 23:57:23.287626 containerd[1517]: time="2025-05-13T23:57:23.287603209Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 23:57:24.422370 containerd[1517]: time="2025-05-13T23:57:24.422277182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:24.423779 containerd[1517]: time="2025-05-13T23:57:24.423074397Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 13 23:57:24.424420 containerd[1517]: time="2025-05-13T23:57:24.424370925Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:24.427858 containerd[1517]: time="2025-05-13T23:57:24.427816973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:24.429032 containerd[1517]: time="2025-05-13T23:57:24.428971892Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.141334928s" May 13 23:57:24.429032 containerd[1517]: time="2025-05-13T23:57:24.429026949Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 13 23:57:24.429634 
containerd[1517]: time="2025-05-13T23:57:24.429526362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 23:57:25.753212 containerd[1517]: time="2025-05-13T23:57:25.753132000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:25.753951 containerd[1517]: time="2025-05-13T23:57:25.753868791Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 13 23:57:25.755212 containerd[1517]: time="2025-05-13T23:57:25.755175372Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:25.757638 containerd[1517]: time="2025-05-13T23:57:25.757610735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:25.758638 containerd[1517]: time="2025-05-13T23:57:25.758605542Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.329043624s" May 13 23:57:25.758638 containerd[1517]: time="2025-05-13T23:57:25.758635314Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 13 23:57:25.759128 containerd[1517]: time="2025-05-13T23:57:25.759076708Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 23:57:26.316993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:57:26.323871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:26.481732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:26.487172 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:26.848091 kubelet[1987]: E0513 23:57:26.847993 1987 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:26.855179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:26.855396 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:26.855810 systemd[1]: kubelet.service: Consumed 206ms CPU time, 95.4M memory peak. May 13 23:57:27.583812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681710422.mount: Deactivated successfully. 
May 13 23:57:28.683081 containerd[1517]: time="2025-05-13T23:57:28.683001096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:28.703077 containerd[1517]: time="2025-05-13T23:57:28.702960818Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 13 23:57:28.732011 containerd[1517]: time="2025-05-13T23:57:28.731938535Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:28.757456 containerd[1517]: time="2025-05-13T23:57:28.757407802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:28.758354 containerd[1517]: time="2025-05-13T23:57:28.758306761Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.999189569s" May 13 23:57:28.758400 containerd[1517]: time="2025-05-13T23:57:28.758358211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 23:57:28.758894 containerd[1517]: time="2025-05-13T23:57:28.758871099Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:57:30.502540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821605066.mount: Deactivated successfully. 
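These pulls land in containerd's k8s.io namespace, where the CRI plugin keeps its images; the ImageCreate events record both the tag and the repo digest. The equivalent manual operations with containerd's own ctr client:

```bash
# Pull the same kube-proxy image into the CRI image namespace
sudo ctr -n k8s.io images pull registry.k8s.io/kube-proxy:v1.31.8

# List what the CRI plugin can now see
sudo ctr -n k8s.io images ls | grep kube-proxy
```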
May 13 23:57:31.891043 containerd[1517]: time="2025-05-13T23:57:31.890980585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:31.891762 containerd[1517]: time="2025-05-13T23:57:31.891715375Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 23:57:31.892782 containerd[1517]: time="2025-05-13T23:57:31.892742981Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:31.895419 containerd[1517]: time="2025-05-13T23:57:31.895386495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:31.896487 containerd[1517]: time="2025-05-13T23:57:31.896449675Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.137549322s" May 13 23:57:31.896487 containerd[1517]: time="2025-05-13T23:57:31.896486750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 23:57:31.897000 containerd[1517]: time="2025-05-13T23:57:31.896972128Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:57:32.400435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930084061.mount: Deactivated successfully. 
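The image sequence so far (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) matches the set a kubeadm preflight pull fetches for a control-plane node; the log itself does not show which client drives the pulls, so treat kubeadm as an inference. Note also that pause:3.10 is pulled even though the CRI config earlier pins SandboxImage registry.k8s.io/pause:3.8; kubeadm pins its own pause tag per release. To enumerate the set for this version:

```bash
# List (or pre-pull) the control-plane images for the version in this log
kubeadm config images list --kubernetes-version v1.31.8
# kubeadm config images pull --kubernetes-version v1.31.8
```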
May 13 23:57:32.405109 containerd[1517]: time="2025-05-13T23:57:32.405064174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:32.405863 containerd[1517]: time="2025-05-13T23:57:32.405802184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 23:57:32.406953 containerd[1517]: time="2025-05-13T23:57:32.406924558Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:32.409020 containerd[1517]: time="2025-05-13T23:57:32.408987642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:32.409782 containerd[1517]: time="2025-05-13T23:57:32.409740911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 512.717079ms" May 13 23:57:32.409782 containerd[1517]: time="2025-05-13T23:57:32.409776934Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:57:32.410336 containerd[1517]: time="2025-05-13T23:57:32.410279755Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 23:57:32.919873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount391701214.mount: Deactivated successfully. May 13 23:57:34.498937 containerd[1517]: time="2025-05-13T23:57:34.498858095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:34.499663 containerd[1517]: time="2025-05-13T23:57:34.499630447Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 13 23:57:34.501001 containerd[1517]: time="2025-05-13T23:57:34.500936940Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:34.503986 containerd[1517]: time="2025-05-13T23:57:34.503909532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:34.505074 containerd[1517]: time="2025-05-13T23:57:34.505038028Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.094724783s" May 13 23:57:34.505074 containerd[1517]: time="2025-05-13T23:57:34.505071421Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 23:57:36.904752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
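As a rough sanity check on the etcd pull above: 56780013 bytes read in 2.094724783 s works out to about 27 MB/s of effective registry throughput.

```bash
# Approximate pull throughput from the figures in the log (~27.1 MB/s)
awk 'BEGIN { printf "%.1f MB/s\n", 56780013 / 2.094724783 / 1e6 }'
```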
May 13 23:57:36.916955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:36.928601 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:57:36.928904 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:57:36.929249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:36.932834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:36.959778 systemd[1]: Reload requested from client PID 2139 ('systemctl') (unit session-7.scope)... May 13 23:57:36.959794 systemd[1]: Reloading... May 13 23:57:37.051764 zram_generator::config[2186]: No configuration found. May 13 23:57:37.227219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:37.330634 systemd[1]: Reloading finished in 370 ms. May 13 23:57:37.386501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:37.388940 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:57:37.389235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:37.389280 systemd[1]: kubelet.service: Consumed 140ms CPU time, 83.5M memory peak. May 13 23:57:37.391036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:37.534842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:37.538950 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:57:37.633604 kubelet[2233]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:57:37.633604 kubelet[2233]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:57:37.633604 kubelet[2233]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
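During the reload, systemd warns that docker.socket's ListenStream points below the legacy /var/run directory and rewrites it to /run/docker.sock in memory only; the durable fix is updating the unit, for example via a drop-in. Because socket directives are additive, the list is cleared before reassigning (a sketch, not Flatcar's shipped unit):

```bash
sudo mkdir -p /etc/systemd/system/docker.socket.d
sudo tee /etc/systemd/system/docker.socket.d/10-runpath.conf >/dev/null <<'EOF'
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
sudo systemctl daemon-reload
```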
May 13 23:57:37.634056 kubelet[2233]: I0513 23:57:37.633660 2233 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:57:37.950867 kubelet[2233]: I0513 23:57:37.950820 2233 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:57:37.950867 kubelet[2233]: I0513 23:57:37.950849 2233 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:57:37.951139 kubelet[2233]: I0513 23:57:37.951112 2233 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:57:37.971941 kubelet[2233]: I0513 23:57:37.971912 2233 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:57:37.972208 kubelet[2233]: E0513 23:57:37.972171 2233 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:37.977799 kubelet[2233]: E0513 23:57:37.977761 2233 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 23:57:37.977799 kubelet[2233]: I0513 23:57:37.977786 2233 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 23:57:37.986153 kubelet[2233]: I0513 23:57:37.985933 2233 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:57:37.987222 kubelet[2233]: I0513 23:57:37.987194 2233 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:57:37.987555 kubelet[2233]: I0513 23:57:37.987508 2233 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:57:37.987763 kubelet[2233]: I0513 23:57:37.987551 2233 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:57:37.987853 kubelet[2233]: I0513 23:57:37.987765 2233 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:57:37.987853 kubelet[2233]: I0513 23:57:37.987777 2233 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:57:37.987936 kubelet[2233]: I0513 23:57:37.987919 2233 state_mem.go:36] "Initialized new in-memory state store" May 13 23:57:37.989660 kubelet[2233]: I0513 23:57:37.989633 2233 kubelet.go:408] "Attempting to sync node with API server" May 13 23:57:37.989660 kubelet[2233]: I0513 23:57:37.989660 2233 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:57:37.989740 kubelet[2233]: I0513 23:57:37.989702 2233 kubelet.go:314] "Adding apiserver pod source" May 13 23:57:37.989740 kubelet[2233]: I0513 23:57:37.989714 2233 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:57:37.994786 kubelet[2233]: W0513 23:57:37.994731 2233 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 13 23:57:37.994870 kubelet[2233]: E0513 23:57:37.994816 2233 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:37.998172 kubelet[2233]: I0513 23:57:37.998091 2233 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 13 23:57:37.999033 kubelet[2233]: W0513 23:57:37.998997 2233 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 13 23:57:37.999099 kubelet[2233]: E0513 23:57:37.999036 2233 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:38.000069 kubelet[2233]: I0513 23:57:38.000051 2233 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:57:38.000656 kubelet[2233]: W0513 23:57:38.000622 2233 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:57:38.001532 kubelet[2233]: I0513 23:57:38.001366 2233 server.go:1269] "Started kubelet" May 13 23:57:38.002185 kubelet[2233]: I0513 23:57:38.002127 2233 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:57:38.003229 kubelet[2233]: I0513 23:57:38.002584 2233 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:57:38.003229 kubelet[2233]: I0513 23:57:38.002659 2233 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:57:38.003229 kubelet[2233]: I0513 23:57:38.002916 2233 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:57:38.004429 kubelet[2233]: I0513 23:57:38.003738 2233 server.go:460] "Adding debug handlers to kubelet server" May 13 23:57:38.004429 kubelet[2233]: I0513 23:57:38.004399 2233 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:57:38.005425 kubelet[2233]: I0513 23:57:38.004668 2233 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:57:38.005425 kubelet[2233]: I0513 23:57:38.004768 2233 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:57:38.005425 kubelet[2233]: I0513 23:57:38.004838 2233 reconciler.go:26] "Reconciler: start to sync state" May 13 23:57:38.005425 kubelet[2233]: W0513 23:57:38.005138 2233 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 13 23:57:38.005425 kubelet[2233]: E0513 23:57:38.005182 2233 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 
23:57:38.005425 kubelet[2233]: E0513 23:57:38.005230 2233 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:57:38.005624 kubelet[2233]: E0513 23:57:38.005477 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" May 13 23:57:38.006472 kubelet[2233]: I0513 23:57:38.006273 2233 factory.go:221] Registration of the systemd container factory successfully May 13 23:57:38.006472 kubelet[2233]: I0513 23:57:38.006387 2233 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:57:38.007288 kubelet[2233]: E0513 23:57:38.007263 2233 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:57:38.007700 kubelet[2233]: I0513 23:57:38.007679 2233 factory.go:221] Registration of the containerd container factory successfully May 13 23:57:38.008851 kubelet[2233]: E0513 23:57:38.006275 2233 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b8d2e9a6331 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:57:38.001339185 +0000 UTC m=+0.454539928,LastTimestamp:2025-05-13 23:57:38.001339185 +0000 UTC m=+0.454539928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 23:57:38.025554 kubelet[2233]: I0513 23:57:38.025392 2233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:57:38.027113 kubelet[2233]: I0513 23:57:38.027092 2233 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:57:38.027363 kubelet[2233]: I0513 23:57:38.027344 2233 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:57:38.027434 kubelet[2233]: I0513 23:57:38.027370 2233 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:57:38.027474 kubelet[2233]: E0513 23:57:38.027417 2233 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:57:38.029458 kubelet[2233]: W0513 23:57:38.029283 2233 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 13 23:57:38.029458 kubelet[2233]: E0513 23:57:38.029350 2233 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:38.030424 kubelet[2233]: I0513 23:57:38.030147 2233 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:57:38.030424 kubelet[2233]: I0513 23:57:38.030167 2233 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:57:38.030424 kubelet[2233]: I0513 23:57:38.030185 2233 state_mem.go:36] "Initialized new in-memory state store" May 13 23:57:38.106318 kubelet[2233]: E0513 23:57:38.106273 2233 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:57:38.128557 kubelet[2233]: E0513 23:57:38.128521 2233 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:57:38.206156 kubelet[2233]: E0513 23:57:38.206052 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" May 13 23:57:38.207163 kubelet[2233]: E0513 23:57:38.207134 2233 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:57:38.307865 kubelet[2233]: E0513 23:57:38.307808 2233 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:57:38.328988 kubelet[2233]: E0513 23:57:38.328951 2233 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:57:38.408598 kubelet[2233]: E0513 23:57:38.408561 2233 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:57:38.509071 kubelet[2233]: E0513 23:57:38.508965 2233 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:57:38.570241 kubelet[2233]: I0513 23:57:38.570200 2233 policy_none.go:49] "None policy: Start" May 13 23:57:38.570965 kubelet[2233]: I0513 23:57:38.570942 2233 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:57:38.570965 kubelet[2233]: I0513 23:57:38.570968 2233 state_mem.go:35] "Initializing new in-memory state store" May 13 23:57:38.586121 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 13 23:57:38.603635 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:57:38.606816 kubelet[2233]: E0513 23:57:38.606766 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" May 13 23:57:38.607573 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:57:38.609271 kubelet[2233]: E0513 23:57:38.609246 2233 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:57:38.614806 kubelet[2233]: I0513 23:57:38.614779 2233 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:57:38.615042 kubelet[2233]: I0513 23:57:38.615027 2233 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:57:38.615102 kubelet[2233]: I0513 23:57:38.615042 2233 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:57:38.615299 kubelet[2233]: I0513 23:57:38.615282 2233 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:57:38.616034 kubelet[2233]: E0513 23:57:38.616014 2233 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:57:38.716797 kubelet[2233]: I0513 23:57:38.716775 2233 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:57:38.717141 kubelet[2233]: E0513 23:57:38.717094 2233 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 13 23:57:38.737127 systemd[1]: Created slice kubepods-burstable-poddbdf7a8fd3782c0064ba7a810486803b.slice - libcontainer container kubepods-burstable-poddbdf7a8fd3782c0064ba7a810486803b.slice. May 13 23:57:38.760101 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 13 23:57:38.764304 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
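The three slices created across the lines above form the kubelet's QoS cgroup hierarchy under the systemd driver named in the NodeConfig. A sketch of the mapping, under the assumption (consistent with the slice names logged here) that Guaranteed pods sit directly under kubepods.slice while the other classes get a nested per-class slice; this is not kubelet's actual cgroup-manager code:

```go
package main

import "fmt"

// qosSlice maps a pod QoS class to the systemd slice the kubelet just
// created: kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice.
func qosSlice(qosClass string) string {
	switch qosClass {
	case "Burstable":
		return "kubepods-burstable.slice"
	case "BestEffort":
		return "kubepods-besteffort.slice"
	default: // Guaranteed pods live directly under the root kubepods slice
		return "kubepods.slice"
	}
}

func main() {
	for _, c := range []string{"Guaranteed", "Burstable", "BestEffort"} {
		fmt.Printf("%-10s -> %s\n", c, qosSlice(c))
	}
}
```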
May 13 23:57:38.809104 kubelet[2233]: I0513 23:57:38.809074 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbdf7a8fd3782c0064ba7a810486803b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dbdf7a8fd3782c0064ba7a810486803b\") " pod="kube-system/kube-apiserver-localhost" May 13 23:57:38.809104 kubelet[2233]: I0513 23:57:38.809104 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:38.809203 kubelet[2233]: I0513 23:57:38.809120 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:38.809203 kubelet[2233]: I0513 23:57:38.809135 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 23:57:38.809203 kubelet[2233]: I0513 23:57:38.809148 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbdf7a8fd3782c0064ba7a810486803b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbdf7a8fd3782c0064ba7a810486803b\") " pod="kube-system/kube-apiserver-localhost" May 13 23:57:38.809203 kubelet[2233]: I0513 23:57:38.809161 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbdf7a8fd3782c0064ba7a810486803b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbdf7a8fd3782c0064ba7a810486803b\") " pod="kube-system/kube-apiserver-localhost" May 13 23:57:38.809203 kubelet[2233]: I0513 23:57:38.809173 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:38.809348 kubelet[2233]: I0513 23:57:38.809187 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:38.809348 kubelet[2233]: I0513 23:57:38.809200 2233 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 23:57:38.918571 kubelet[2233]: I0513 23:57:38.918537 2233 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:57:38.918893 kubelet[2233]: E0513 23:57:38.918865 2233 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 13 23:57:39.003893 kubelet[2233]: W0513 23:57:39.003839 2233 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 13 23:57:39.003954 kubelet[2233]: E0513 23:57:39.003895 2233 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:39.029779 kubelet[2233]: W0513 23:57:39.029645 2233 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 13 23:57:39.029779 kubelet[2233]: E0513 23:57:39.029688 2233 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:39.057824 kubelet[2233]: E0513 23:57:39.057804 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:39.058309 containerd[1517]: time="2025-05-13T23:57:39.058274056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dbdf7a8fd3782c0064ba7a810486803b,Namespace:kube-system,Attempt:0,}" May 13 23:57:39.063527 kubelet[2233]: E0513 23:57:39.063489 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:39.063842 containerd[1517]: time="2025-05-13T23:57:39.063815139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 23:57:39.066316 kubelet[2233]: E0513 23:57:39.066290 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:39.066584 containerd[1517]: time="2025-05-13T23:57:39.066550312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 23:57:39.319979 kubelet[2233]: I0513 23:57:39.319947 2233 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:57:39.320312 kubelet[2233]: E0513 23:57:39.320267 2233 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 13 23:57:39.402776 kubelet[2233]: W0513 23:57:39.402674 2233 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 13 23:57:39.402845 kubelet[2233]: E0513 23:57:39.402776 2233 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:39.407432 kubelet[2233]: E0513 23:57:39.407394 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s" May 13 23:57:39.454030 kubelet[2233]: W0513 23:57:39.453972 2233 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 13 23:57:39.454092 kubelet[2233]: E0513 23:57:39.454040 2233 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:40.054530 kubelet[2233]: E0513 23:57:40.054476 2233 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:40.122290 kubelet[2233]: I0513 23:57:40.122265 2233 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:57:40.122629 kubelet[2233]: E0513 23:57:40.122579 2233 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 13 23:57:40.313359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765881110.mount: Deactivated successfully. 
May 13 23:57:40.321368 containerd[1517]: time="2025-05-13T23:57:40.321320786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:40.324151 containerd[1517]: time="2025-05-13T23:57:40.324098982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 13 23:57:40.325171 containerd[1517]: time="2025-05-13T23:57:40.325145244Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:40.326977 containerd[1517]: time="2025-05-13T23:57:40.326939755Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:40.327797 containerd[1517]: time="2025-05-13T23:57:40.327757110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 23:57:40.328839 containerd[1517]: time="2025-05-13T23:57:40.328789573Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:40.329737 containerd[1517]: time="2025-05-13T23:57:40.329689001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 23:57:40.331848 containerd[1517]: time="2025-05-13T23:57:40.331813277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:40.333795 containerd[1517]: time="2025-05-13T23:57:40.333755852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.269865281s" May 13 23:57:40.336222 containerd[1517]: time="2025-05-13T23:57:40.336126888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.277775517s" May 13 23:57:40.336452 containerd[1517]: time="2025-05-13T23:57:40.336421243Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.269807384s" May 13 23:57:40.517846 containerd[1517]: time="2025-05-13T23:57:40.516125233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:57:40.517846 containerd[1517]: time="2025-05-13T23:57:40.517559447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:57:40.517846 containerd[1517]: time="2025-05-13T23:57:40.517575259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:40.517846 containerd[1517]: time="2025-05-13T23:57:40.517459329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:57:40.517846 containerd[1517]: time="2025-05-13T23:57:40.517524850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:57:40.517846 containerd[1517]: time="2025-05-13T23:57:40.517535634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:40.517846 containerd[1517]: time="2025-05-13T23:57:40.517613351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:40.519112 containerd[1517]: time="2025-05-13T23:57:40.519009464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:40.520537 containerd[1517]: time="2025-05-13T23:57:40.519737678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:57:40.520537 containerd[1517]: time="2025-05-13T23:57:40.519790039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:57:40.520537 containerd[1517]: time="2025-05-13T23:57:40.519805040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:40.520537 containerd[1517]: time="2025-05-13T23:57:40.519892571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:40.555898 systemd[1]: Started cri-containerd-45f42c651623be1b61ece709565d451bb72b3b57b93d53eeabdf23929f541490.scope - libcontainer container 45f42c651623be1b61ece709565d451bb72b3b57b93d53eeabdf23929f541490. May 13 23:57:40.560997 systemd[1]: Started cri-containerd-49af19bb712aa33ae79a5055cfccf74cc0fa7e11b08bf8e97ac8388d074f5f12.scope - libcontainer container 49af19bb712aa33ae79a5055cfccf74cc0fa7e11b08bf8e97ac8388d074f5f12. May 13 23:57:40.563175 systemd[1]: Started cri-containerd-f7586c4d3a748b78b11c4067448709ff2a757ec0f65e34e4d3bb0041df61b503.scope - libcontainer container f7586c4d3a748b78b11c4067448709ff2a757ec0f65e34e4d3bb0041df61b503. 
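With the three runc shims loaded and their sandbox scopes running, the kubelet is speaking to containerd v1.7.23 over the CRI (the same runtime.v1.RuntimeService whose RuntimeConfig method returned Unimplemented earlier in this log). A minimal sketch of a standalone CRI client calling Version, assuming the k8s.io/cri-api module and containerd's conventional socket path, which is not stated in this log:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path is an assumption; containerd typically listens on
	// /run/containerd/containerd.sock.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Version belongs to runtime.v1.RuntimeService, the service the
	// kubelet probed during startup above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```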
May 13 23:57:40.612299 containerd[1517]: time="2025-05-13T23:57:40.612211238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"45f42c651623be1b61ece709565d451bb72b3b57b93d53eeabdf23929f541490\"" May 13 23:57:40.613684 kubelet[2233]: E0513 23:57:40.613495 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:40.614171 containerd[1517]: time="2025-05-13T23:57:40.614134706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dbdf7a8fd3782c0064ba7a810486803b,Namespace:kube-system,Attempt:0,} returns sandbox id \"49af19bb712aa33ae79a5055cfccf74cc0fa7e11b08bf8e97ac8388d074f5f12\"" May 13 23:57:40.615005 kubelet[2233]: E0513 23:57:40.614953 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:40.615799 containerd[1517]: time="2025-05-13T23:57:40.615647440Z" level=info msg="CreateContainer within sandbox \"45f42c651623be1b61ece709565d451bb72b3b57b93d53eeabdf23929f541490\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:57:40.617051 containerd[1517]: time="2025-05-13T23:57:40.617026730Z" level=info msg="CreateContainer within sandbox \"49af19bb712aa33ae79a5055cfccf74cc0fa7e11b08bf8e97ac8388d074f5f12\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:57:40.622283 containerd[1517]: time="2025-05-13T23:57:40.622218550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7586c4d3a748b78b11c4067448709ff2a757ec0f65e34e4d3bb0041df61b503\"" May 13 23:57:40.622863 kubelet[2233]: E0513 23:57:40.622827 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:40.624911 containerd[1517]: time="2025-05-13T23:57:40.624888598Z" level=info msg="CreateContainer within sandbox \"f7586c4d3a748b78b11c4067448709ff2a757ec0f65e34e4d3bb0041df61b503\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:57:40.640177 containerd[1517]: time="2025-05-13T23:57:40.640109431Z" level=info msg="CreateContainer within sandbox \"45f42c651623be1b61ece709565d451bb72b3b57b93d53eeabdf23929f541490\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"829c1cc75c987af5806194f5a5ae4e2a6ad5b130e2b3969dd3989f0be6e7c994\"" May 13 23:57:40.640575 containerd[1517]: time="2025-05-13T23:57:40.640550288Z" level=info msg="StartContainer for \"829c1cc75c987af5806194f5a5ae4e2a6ad5b130e2b3969dd3989f0be6e7c994\"" May 13 23:57:40.646231 containerd[1517]: time="2025-05-13T23:57:40.646184948Z" level=info msg="CreateContainer within sandbox \"49af19bb712aa33ae79a5055cfccf74cc0fa7e11b08bf8e97ac8388d074f5f12\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c169c481a61727538b0871a3bd169d801f46790f6263486d7fc0085b2a818dbc\"" May 13 23:57:40.646830 containerd[1517]: time="2025-05-13T23:57:40.646544052Z" level=info msg="StartContainer for \"c169c481a61727538b0871a3bd169d801f46790f6263486d7fc0085b2a818dbc\"" May 13 
23:57:40.650229 containerd[1517]: time="2025-05-13T23:57:40.650188181Z" level=info msg="CreateContainer within sandbox \"f7586c4d3a748b78b11c4067448709ff2a757ec0f65e34e4d3bb0041df61b503\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"deaf932ebf91cae2063d9955f20a1d5929412337586613486a2b072e6e0d1e04\"" May 13 23:57:40.650999 containerd[1517]: time="2025-05-13T23:57:40.650910216Z" level=info msg="StartContainer for \"deaf932ebf91cae2063d9955f20a1d5929412337586613486a2b072e6e0d1e04\"" May 13 23:57:40.671215 systemd[1]: Started cri-containerd-829c1cc75c987af5806194f5a5ae4e2a6ad5b130e2b3969dd3989f0be6e7c994.scope - libcontainer container 829c1cc75c987af5806194f5a5ae4e2a6ad5b130e2b3969dd3989f0be6e7c994. May 13 23:57:40.674140 systemd[1]: Started cri-containerd-c169c481a61727538b0871a3bd169d801f46790f6263486d7fc0085b2a818dbc.scope - libcontainer container c169c481a61727538b0871a3bd169d801f46790f6263486d7fc0085b2a818dbc. May 13 23:57:40.691860 systemd[1]: Started cri-containerd-deaf932ebf91cae2063d9955f20a1d5929412337586613486a2b072e6e0d1e04.scope - libcontainer container deaf932ebf91cae2063d9955f20a1d5929412337586613486a2b072e6e0d1e04. May 13 23:57:40.730448 containerd[1517]: time="2025-05-13T23:57:40.729571082Z" level=info msg="StartContainer for \"c169c481a61727538b0871a3bd169d801f46790f6263486d7fc0085b2a818dbc\" returns successfully" May 13 23:57:40.739530 containerd[1517]: time="2025-05-13T23:57:40.739389174Z" level=info msg="StartContainer for \"829c1cc75c987af5806194f5a5ae4e2a6ad5b130e2b3969dd3989f0be6e7c994\" returns successfully" May 13 23:57:40.748399 containerd[1517]: time="2025-05-13T23:57:40.748345459Z" level=info msg="StartContainer for \"deaf932ebf91cae2063d9955f20a1d5929412337586613486a2b072e6e0d1e04\" returns successfully" May 13 23:57:41.035931 kubelet[2233]: E0513 23:57:41.035713 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:41.038637 kubelet[2233]: E0513 23:57:41.038582 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:41.040944 kubelet[2233]: E0513 23:57:41.040882 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:41.724294 kubelet[2233]: I0513 23:57:41.724241 2233 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:57:42.016561 kubelet[2233]: E0513 23:57:42.016429 2233 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 23:57:42.048056 kubelet[2233]: E0513 23:57:42.048006 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:42.111840 kubelet[2233]: I0513 23:57:42.111778 2233 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 23:57:42.111840 kubelet[2233]: E0513 23:57:42.111830 2233 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 23:57:42.996803 kubelet[2233]: I0513 23:57:42.996750 2233 apiserver.go:52] "Watching apiserver" May 13 23:57:43.005172 
kubelet[2233]: I0513 23:57:43.005119 2233 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:57:43.052040 kubelet[2233]: E0513 23:57:43.051988 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:44.043240 systemd[1]: Reload requested from client PID 2514 ('systemctl') (unit session-7.scope)... May 13 23:57:44.043263 systemd[1]: Reloading... May 13 23:57:44.044354 kubelet[2233]: E0513 23:57:44.044323 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:44.149765 zram_generator::config[2564]: No configuration found. May 13 23:57:44.257787 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:44.375704 systemd[1]: Reloading finished in 331 ms. May 13 23:57:44.401237 kubelet[2233]: I0513 23:57:44.401134 2233 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:57:44.401273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:44.411369 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:57:44.411694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:44.411789 systemd[1]: kubelet.service: Consumed 956ms CPU time, 120.9M memory peak. May 13 23:57:44.422944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:44.577429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:44.582761 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:57:44.622310 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:57:44.622310 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:57:44.622310 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
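The deprecation warnings at the end of the restart above say --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config. A sketch of the equivalent KubeletConfiguration, built from the paths this log actually shows (the flexvolume dir from probe.go, the static pod path, the systemd cgroup driver); the containerd socket path and the exact v1beta1 field names, in particular VolumePluginDir, are assumptions rather than values confirmed by the log:

```go
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Assumed socket path; --pod-infra-container-image has no config
		// equivalent (the warning says sandbox image info moves to CRI).
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
		StaticPodPath:            "/etc/kubernetes/manifests",
		CgroupDriver:             "systemd",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```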
May 13 23:57:44.622759 kubelet[2603]: I0513 23:57:44.622358 2603 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:57:44.630437 kubelet[2603]: I0513 23:57:44.630310 2603 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:57:44.630437 kubelet[2603]: I0513 23:57:44.630368 2603 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:57:44.630668 kubelet[2603]: I0513 23:57:44.630645 2603 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:57:44.632201 kubelet[2603]: I0513 23:57:44.632176 2603 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:57:44.634397 kubelet[2603]: I0513 23:57:44.634130 2603 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:57:44.637668 kubelet[2603]: E0513 23:57:44.637638 2603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 23:57:44.637668 kubelet[2603]: I0513 23:57:44.637670 2603 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 23:57:44.645163 kubelet[2603]: I0513 23:57:44.645123 2603 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:57:44.645295 kubelet[2603]: I0513 23:57:44.645268 2603 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:57:44.645637 kubelet[2603]: I0513 23:57:44.645397 2603 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:57:44.645730 kubelet[2603]: I0513 23:57:44.645429 2603 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:57:44.645808 kubelet[2603]: I0513 23:57:44.645761 2603 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:57:44.645808 kubelet[2603]: I0513 23:57:44.645796 2603 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:57:44.645951 kubelet[2603]: I0513 23:57:44.645857 2603 state_mem.go:36] "Initialized new in-memory state store" May 13 23:57:44.647026 kubelet[2603]: I0513 23:57:44.646308 2603 kubelet.go:408] "Attempting to sync node with API server" May 13 23:57:44.647026 kubelet[2603]: I0513 23:57:44.646341 2603 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:57:44.647026 kubelet[2603]: I0513 23:57:44.646387 2603 kubelet.go:314] "Adding apiserver pod source" May 13 23:57:44.647026 kubelet[2603]: I0513 23:57:44.646406 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:57:44.647556 kubelet[2603]: I0513 23:57:44.647359 2603 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 13 23:57:44.647958 kubelet[2603]: I0513 23:57:44.647940 2603 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:57:44.648593 kubelet[2603]: I0513 23:57:44.648520 2603 server.go:1269] "Started kubelet" May 13 23:57:44.650082 kubelet[2603]: I0513 23:57:44.649776 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:57:44.650218 kubelet[2603]: I0513 23:57:44.650163 2603 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:57:44.650218 kubelet[2603]: I0513 23:57:44.650184 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:57:44.650348 kubelet[2603]: I0513 23:57:44.650217 2603 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:57:44.651829 kubelet[2603]: I0513 23:57:44.651110 2603 server.go:460] "Adding 
debug handlers to kubelet server" May 13 23:57:44.652767 kubelet[2603]: I0513 23:57:44.652054 2603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:57:44.656783 kubelet[2603]: I0513 23:57:44.656767 2603 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:57:44.656929 kubelet[2603]: I0513 23:57:44.656915 2603 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:57:44.657109 kubelet[2603]: I0513 23:57:44.657097 2603 reconciler.go:26] "Reconciler: start to sync state" May 13 23:57:44.657510 kubelet[2603]: E0513 23:57:44.657493 2603 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:57:44.658596 kubelet[2603]: I0513 23:57:44.658565 2603 factory.go:221] Registration of the systemd container factory successfully May 13 23:57:44.660351 kubelet[2603]: I0513 23:57:44.658690 2603 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:57:44.662650 kubelet[2603]: E0513 23:57:44.662618 2603 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:57:44.662829 kubelet[2603]: I0513 23:57:44.662795 2603 factory.go:221] Registration of the containerd container factory successfully May 13 23:57:44.667084 kubelet[2603]: I0513 23:57:44.667038 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:57:44.668705 kubelet[2603]: I0513 23:57:44.668677 2603 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:57:44.668778 kubelet[2603]: I0513 23:57:44.668732 2603 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:57:44.668778 kubelet[2603]: I0513 23:57:44.668753 2603 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:57:44.668834 kubelet[2603]: E0513 23:57:44.668797 2603 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:57:44.698456 kubelet[2603]: I0513 23:57:44.698431 2603 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:57:44.698548 kubelet[2603]: I0513 23:57:44.698537 2603 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:57:44.698618 kubelet[2603]: I0513 23:57:44.698609 2603 state_mem.go:36] "Initialized new in-memory state store" May 13 23:57:44.698848 kubelet[2603]: I0513 23:57:44.698833 2603 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:57:44.698930 kubelet[2603]: I0513 23:57:44.698907 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:57:44.698981 kubelet[2603]: I0513 23:57:44.698972 2603 policy_none.go:49] "None policy: Start" May 13 23:57:44.699577 kubelet[2603]: I0513 23:57:44.699548 2603 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:57:44.699625 kubelet[2603]: I0513 23:57:44.699581 2603 state_mem.go:35] "Initializing new in-memory state store" May 13 23:57:44.699786 kubelet[2603]: I0513 23:57:44.699770 2603 state_mem.go:75] "Updated machine memory state" May 13 23:57:44.704150 kubelet[2603]: I0513 23:57:44.703968 2603 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:57:44.704150 kubelet[2603]: I0513 23:57:44.704143 2603 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:57:44.704248 kubelet[2603]: I0513 23:57:44.704155 2603 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:57:44.704387 kubelet[2603]: I0513 23:57:44.704364 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:57:44.808674 kubelet[2603]: I0513 23:57:44.808626 2603 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:57:44.858068 kubelet[2603]: I0513 23:57:44.858027 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:44.858167 kubelet[2603]: I0513 23:57:44.858087 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:44.858167 kubelet[2603]: I0513 23:57:44.858110 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 
23:57:44.858167 kubelet[2603]: I0513 23:57:44.858128 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbdf7a8fd3782c0064ba7a810486803b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dbdf7a8fd3782c0064ba7a810486803b\") " pod="kube-system/kube-apiserver-localhost" May 13 23:57:44.858247 kubelet[2603]: I0513 23:57:44.858218 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbdf7a8fd3782c0064ba7a810486803b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbdf7a8fd3782c0064ba7a810486803b\") " pod="kube-system/kube-apiserver-localhost" May 13 23:57:44.858275 kubelet[2603]: I0513 23:57:44.858256 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:44.858302 kubelet[2603]: I0513 23:57:44.858279 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:44.858302 kubelet[2603]: I0513 23:57:44.858296 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:57:44.858360 kubelet[2603]: I0513 23:57:44.858313 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbdf7a8fd3782c0064ba7a810486803b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbdf7a8fd3782c0064ba7a810486803b\") " pod="kube-system/kube-apiserver-localhost" May 13 23:57:44.874387 kubelet[2603]: E0513 23:57:44.874340 2603 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:57:44.876051 kubelet[2603]: I0513 23:57:44.876028 2603 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 23:57:44.876128 kubelet[2603]: I0513 23:57:44.876112 2603 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 23:57:45.001622 sudo[2640]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 23:57:45.002027 sudo[2640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 23:57:45.111345 kubelet[2603]: E0513 23:57:45.111223 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:45.111345 kubelet[2603]: E0513 23:57:45.111262 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:45.175620 kubelet[2603]: E0513 23:57:45.175587 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:45.534693 sudo[2640]: pam_unix(sudo:session): session closed for user root May 13 23:57:45.647714 kubelet[2603]: I0513 23:57:45.647665 2603 apiserver.go:52] "Watching apiserver" May 13 23:57:45.658067 kubelet[2603]: I0513 23:57:45.658023 2603 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:57:45.683703 kubelet[2603]: E0513 23:57:45.683665 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:45.683860 kubelet[2603]: E0513 23:57:45.683757 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:45.695541 kubelet[2603]: E0513 23:57:45.695494 2603 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:57:45.695681 kubelet[2603]: E0513 23:57:45.695632 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:45.750963 kubelet[2603]: I0513 23:57:45.750899 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.7508792 podStartE2EDuration="2.7508792s" podCreationTimestamp="2025-05-13 23:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:45.743993373 +0000 UTC m=+1.157155732" watchObservedRunningTime="2025-05-13 23:57:45.7508792 +0000 UTC m=+1.164041559" May 13 23:57:45.757646 kubelet[2603]: I0513 23:57:45.757585 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.757569244 podStartE2EDuration="1.757569244s" podCreationTimestamp="2025-05-13 23:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:45.757406368 +0000 UTC m=+1.170568727" watchObservedRunningTime="2025-05-13 23:57:45.757569244 +0000 UTC m=+1.170731603" May 13 23:57:45.757778 kubelet[2603]: I0513 23:57:45.757713 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7577103090000001 podStartE2EDuration="1.757710309s" podCreationTimestamp="2025-05-13 23:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:45.751228568 +0000 UTC m=+1.164390926" watchObservedRunningTime="2025-05-13 23:57:45.757710309 +0000 UTC m=+1.170872668" May 13 23:57:46.684282 kubelet[2603]: E0513 23:57:46.684229 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:46.684790 kubelet[2603]: E0513 23:57:46.684762 2603 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:46.775454 sudo[1701]: pam_unix(sudo:session): session closed for user root May 13 23:57:46.776957 sshd[1700]: Connection closed by 10.0.0.1 port 54808 May 13 23:57:46.777450 sshd-session[1697]: pam_unix(sshd:session): session closed for user core May 13 23:57:46.781707 kubelet[2603]: E0513 23:57:46.781681 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:46.781924 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:54808.service: Deactivated successfully. May 13 23:57:46.784537 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:57:46.784839 systemd[1]: session-7.scope: Consumed 4.568s CPU time, 255.9M memory peak. May 13 23:57:46.786499 systemd-logind[1499]: Session 7 logged out. Waiting for processes to exit. May 13 23:57:46.787418 systemd-logind[1499]: Removed session 7. May 13 23:57:49.142179 kubelet[2603]: E0513 23:57:49.142121 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:50.785182 kubelet[2603]: I0513 23:57:50.785136 2603 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:57:50.785678 kubelet[2603]: I0513 23:57:50.785581 2603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:57:50.785716 containerd[1517]: time="2025-05-13T23:57:50.785419319Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:57:51.629929 systemd[1]: Created slice kubepods-besteffort-pod0fdd054e_0c63_403b_86c2_945f24ed3c33.slice - libcontainer container kubepods-besteffort-pod0fdd054e_0c63_403b_86c2_945f24ed3c33.slice. May 13 23:57:51.655508 systemd[1]: Created slice kubepods-burstable-pod7896f9a0_04dc_4bdf_9417_b5b711ff4829.slice - libcontainer container kubepods-burstable-pod7896f9a0_04dc_4bdf_9417_b5b711ff4829.slice. 
May 13 23:57:51.703469 kubelet[2603]: I0513 23:57:51.703428 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5s9k\" (UniqueName: \"kubernetes.io/projected/7896f9a0-04dc-4bdf-9417-b5b711ff4829-kube-api-access-w5s9k\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703469 kubelet[2603]: I0513 23:57:51.703471 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-run\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703689 kubelet[2603]: I0513 23:57:51.703492 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-hostproc\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703689 kubelet[2603]: I0513 23:57:51.703515 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-lib-modules\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703689 kubelet[2603]: I0513 23:57:51.703533 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-host-proc-sys-net\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703689 kubelet[2603]: I0513 23:57:51.703559 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fdd054e-0c63-403b-86c2-945f24ed3c33-lib-modules\") pod \"kube-proxy-znf8p\" (UID: \"0fdd054e-0c63-403b-86c2-945f24ed3c33\") " pod="kube-system/kube-proxy-znf8p" May 13 23:57:51.703689 kubelet[2603]: I0513 23:57:51.703579 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-bpf-maps\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703689 kubelet[2603]: I0513 23:57:51.703610 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-etc-cni-netd\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703946 kubelet[2603]: I0513 23:57:51.703630 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-config-path\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703946 kubelet[2603]: I0513 23:57:51.703650 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-xtables-lock\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703946 kubelet[2603]: I0513 23:57:51.703673 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-host-proc-sys-kernel\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703946 kubelet[2603]: I0513 23:57:51.703715 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-cgroup\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.703946 kubelet[2603]: I0513 23:57:51.703765 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fdd054e-0c63-403b-86c2-945f24ed3c33-kube-proxy\") pod \"kube-proxy-znf8p\" (UID: \"0fdd054e-0c63-403b-86c2-945f24ed3c33\") " pod="kube-system/kube-proxy-znf8p" May 13 23:57:51.704109 kubelet[2603]: I0513 23:57:51.703795 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fdd054e-0c63-403b-86c2-945f24ed3c33-xtables-lock\") pod \"kube-proxy-znf8p\" (UID: \"0fdd054e-0c63-403b-86c2-945f24ed3c33\") " pod="kube-system/kube-proxy-znf8p" May 13 23:57:51.704109 kubelet[2603]: I0513 23:57:51.703900 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwlt\" (UniqueName: \"kubernetes.io/projected/0fdd054e-0c63-403b-86c2-945f24ed3c33-kube-api-access-htwlt\") pod \"kube-proxy-znf8p\" (UID: \"0fdd054e-0c63-403b-86c2-945f24ed3c33\") " pod="kube-system/kube-proxy-znf8p" May 13 23:57:51.704109 kubelet[2603]: I0513 23:57:51.703922 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cni-path\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.704109 kubelet[2603]: I0513 23:57:51.703940 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7896f9a0-04dc-4bdf-9417-b5b711ff4829-clustermesh-secrets\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.704109 kubelet[2603]: I0513 23:57:51.703957 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7896f9a0-04dc-4bdf-9417-b5b711ff4829-hubble-tls\") pod \"cilium-hs8x4\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " pod="kube-system/cilium-hs8x4" May 13 23:57:51.832484 systemd[1]: Created slice kubepods-besteffort-podac50abef_6ad4_40fb_9ac8_24f745cd4755.slice - libcontainer container kubepods-besteffort-podac50abef_6ad4_40fb_9ac8_24f745cd4755.slice. 
May 13 23:57:51.906813 kubelet[2603]: I0513 23:57:51.906634 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfdx4\" (UniqueName: \"kubernetes.io/projected/ac50abef-6ad4-40fb-9ac8-24f745cd4755-kube-api-access-gfdx4\") pod \"cilium-operator-5d85765b45-27h4n\" (UID: \"ac50abef-6ad4-40fb-9ac8-24f745cd4755\") " pod="kube-system/cilium-operator-5d85765b45-27h4n" May 13 23:57:51.906813 kubelet[2603]: I0513 23:57:51.906682 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac50abef-6ad4-40fb-9ac8-24f745cd4755-cilium-config-path\") pod \"cilium-operator-5d85765b45-27h4n\" (UID: \"ac50abef-6ad4-40fb-9ac8-24f745cd4755\") " pod="kube-system/cilium-operator-5d85765b45-27h4n" May 13 23:57:51.954343 kubelet[2603]: E0513 23:57:51.954321 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:51.954805 containerd[1517]: time="2025-05-13T23:57:51.954758642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znf8p,Uid:0fdd054e-0c63-403b-86c2-945f24ed3c33,Namespace:kube-system,Attempt:0,}" May 13 23:57:51.958796 kubelet[2603]: E0513 23:57:51.958765 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:51.959068 containerd[1517]: time="2025-05-13T23:57:51.959038646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hs8x4,Uid:7896f9a0-04dc-4bdf-9417-b5b711ff4829,Namespace:kube-system,Attempt:0,}" May 13 23:57:51.986387 containerd[1517]: time="2025-05-13T23:57:51.984575780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:57:51.986387 containerd[1517]: time="2025-05-13T23:57:51.984630193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:57:51.986387 containerd[1517]: time="2025-05-13T23:57:51.984643854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:51.986387 containerd[1517]: time="2025-05-13T23:57:51.984738337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:51.991174 containerd[1517]: time="2025-05-13T23:57:51.991077450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:57:51.991174 containerd[1517]: time="2025-05-13T23:57:51.991125373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:57:51.991174 containerd[1517]: time="2025-05-13T23:57:51.991135128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:51.991385 containerd[1517]: time="2025-05-13T23:57:51.991208039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:52.010177 systemd[1]: Started cri-containerd-79436ce20dc612c5b051ec033a761f4b60b896e0bfd332c258f1d5611a559241.scope - libcontainer container 79436ce20dc612c5b051ec033a761f4b60b896e0bfd332c258f1d5611a559241. May 13 23:57:52.016837 systemd[1]: Started cri-containerd-ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a.scope - libcontainer container ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a. May 13 23:57:52.043992 containerd[1517]: time="2025-05-13T23:57:52.043929794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znf8p,Uid:0fdd054e-0c63-403b-86c2-945f24ed3c33,Namespace:kube-system,Attempt:0,} returns sandbox id \"79436ce20dc612c5b051ec033a761f4b60b896e0bfd332c258f1d5611a559241\"" May 13 23:57:52.045002 kubelet[2603]: E0513 23:57:52.044921 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:52.048660 containerd[1517]: time="2025-05-13T23:57:52.048608656Z" level=info msg="CreateContainer within sandbox \"79436ce20dc612c5b051ec033a761f4b60b896e0bfd332c258f1d5611a559241\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:57:52.051857 containerd[1517]: time="2025-05-13T23:57:52.051825665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hs8x4,Uid:7896f9a0-04dc-4bdf-9417-b5b711ff4829,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\"" May 13 23:57:52.052853 kubelet[2603]: E0513 23:57:52.052827 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:52.054014 containerd[1517]: time="2025-05-13T23:57:52.053982466Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 23:57:52.073967 containerd[1517]: time="2025-05-13T23:57:52.073926946Z" level=info msg="CreateContainer within sandbox \"79436ce20dc612c5b051ec033a761f4b60b896e0bfd332c258f1d5611a559241\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b015aab3bf646f3db09eb99a066888d9463a183bd2e5d88af80e22c74902d29\"" May 13 23:57:52.074580 containerd[1517]: time="2025-05-13T23:57:52.074484902Z" level=info msg="StartContainer for \"1b015aab3bf646f3db09eb99a066888d9463a183bd2e5d88af80e22c74902d29\"" May 13 23:57:52.112956 systemd[1]: Started cri-containerd-1b015aab3bf646f3db09eb99a066888d9463a183bd2e5d88af80e22c74902d29.scope - libcontainer container 1b015aab3bf646f3db09eb99a066888d9463a183bd2e5d88af80e22c74902d29. 
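Every record in this capture follows the journald short-precise pattern `Mon DD HH:MM:SS.ffffff unit[pid]: message` (the `[pid]` part is absent on kernel lines). A minimal sketch for slicing a capture like this one back into structured records:

```go
package main

import (
	"fmt"
	"regexp"
)

// recordRE matches the journald-style prefix used throughout this capture:
// "May 13 23:57:52.044921 kubelet[2603]: <message>". The [pid] part is
// absent on kernel lines, so it is optional here.
var recordRE = regexp.MustCompile(
	`^(\w+ \d+ \d{2}:\d{2}:\d{2}\.\d+) ([\w.@-]+)(?:\[(\d+)\])?: `)

func main() {
	line := "May 13 23:57:52.044921 kubelet[2603]: E0513 23:57:52.044921 2603 dns.go:153] ..."
	if m := recordRE.FindStringSubmatch(line); m != nil {
		fmt.Printf("ts=%q unit=%q pid=%q msg=%q\n",
			m[1], m[2], m[3], line[len(m[0]):])
	}
}
```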
May 13 23:57:52.142341 kubelet[2603]: E0513 23:57:52.142293 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:52.142797 containerd[1517]: time="2025-05-13T23:57:52.142750415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-27h4n,Uid:ac50abef-6ad4-40fb-9ac8-24f745cd4755,Namespace:kube-system,Attempt:0,}" May 13 23:57:52.150922 containerd[1517]: time="2025-05-13T23:57:52.150837111Z" level=info msg="StartContainer for \"1b015aab3bf646f3db09eb99a066888d9463a183bd2e5d88af80e22c74902d29\" returns successfully" May 13 23:57:52.175184 containerd[1517]: time="2025-05-13T23:57:52.174989891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:57:52.175184 containerd[1517]: time="2025-05-13T23:57:52.175038415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:57:52.175184 containerd[1517]: time="2025-05-13T23:57:52.175048330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:52.175184 containerd[1517]: time="2025-05-13T23:57:52.175128242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:57:52.197918 systemd[1]: Started cri-containerd-ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f.scope - libcontainer container ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f. May 13 23:57:52.236610 containerd[1517]: time="2025-05-13T23:57:52.236545712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-27h4n,Uid:ac50abef-6ad4-40fb-9ac8-24f745cd4755,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f\"" May 13 23:57:52.237671 kubelet[2603]: E0513 23:57:52.237394 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:52.696069 kubelet[2603]: E0513 23:57:52.696026 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:56.570929 kubelet[2603]: E0513 23:57:56.570892 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:56.578380 kubelet[2603]: I0513 23:57:56.578320 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-znf8p" podStartSLOduration=5.5783025649999995 podStartE2EDuration="5.578302565s" podCreationTimestamp="2025-05-13 23:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:52.704412715 +0000 UTC m=+8.117575074" watchObservedRunningTime="2025-05-13 23:57:56.578302565 +0000 UTC m=+11.991464924" May 13 23:57:56.785378 kubelet[2603]: E0513 23:57:56.785340 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:59.032168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130494167.mount: Deactivated successfully. May 13 23:57:59.404565 kubelet[2603]: E0513 23:57:59.404472 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:57:59.795107 update_engine[1505]: I20250513 23:57:59.794924 1505 update_attempter.cc:509] Updating boot flags... May 13 23:57:59.842435 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3007) May 13 23:57:59.911759 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3009) May 13 23:57:59.961750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3009) May 13 23:58:01.952580 containerd[1517]: time="2025-05-13T23:58:01.952499717Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:01.953476 containerd[1517]: time="2025-05-13T23:58:01.953207064Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 23:58:01.954485 containerd[1517]: time="2025-05-13T23:58:01.954434462Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:01.955979 containerd[1517]: time="2025-05-13T23:58:01.955948104Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.901926637s" May 13 23:58:01.956026 containerd[1517]: time="2025-05-13T23:58:01.955977872Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 23:58:01.964608 containerd[1517]: time="2025-05-13T23:58:01.964572514Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 23:58:01.979469 containerd[1517]: time="2025-05-13T23:58:01.979435099Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:58:01.993395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578100647.mount: Deactivated successfully. 
May 13 23:58:01.995834 containerd[1517]: time="2025-05-13T23:58:01.995791666Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\"" May 13 23:58:01.998382 containerd[1517]: time="2025-05-13T23:58:01.998354186Z" level=info msg="StartContainer for \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\"" May 13 23:58:02.028972 systemd[1]: Started cri-containerd-143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d.scope - libcontainer container 143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d. May 13 23:58:02.068896 systemd[1]: cri-containerd-143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d.scope: Deactivated successfully. May 13 23:58:02.202047 containerd[1517]: time="2025-05-13T23:58:02.201981646Z" level=info msg="StartContainer for \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\" returns successfully" May 13 23:58:02.526527 containerd[1517]: time="2025-05-13T23:58:02.526461918Z" level=info msg="shim disconnected" id=143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d namespace=k8s.io May 13 23:58:02.526527 containerd[1517]: time="2025-05-13T23:58:02.526508694Z" level=warning msg="cleaning up after shim disconnected" id=143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d namespace=k8s.io May 13 23:58:02.526527 containerd[1517]: time="2025-05-13T23:58:02.526516878Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:58:02.714079 kubelet[2603]: E0513 23:58:02.714037 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:02.716839 containerd[1517]: time="2025-05-13T23:58:02.716693902Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 23:58:02.733883 containerd[1517]: time="2025-05-13T23:58:02.733827373Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\"" May 13 23:58:02.734552 containerd[1517]: time="2025-05-13T23:58:02.734382727Z" level=info msg="StartContainer for \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\"" May 13 23:58:02.761854 systemd[1]: Started cri-containerd-1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07.scope - libcontainer container 1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07. May 13 23:58:02.793103 containerd[1517]: time="2025-05-13T23:58:02.792932259Z" level=info msg="StartContainer for \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\" returns successfully" May 13 23:58:02.806542 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:58:02.807138 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:58:02.807343 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 23:58:02.814110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 13 23:58:02.814329 systemd[1]: cri-containerd-1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07.scope: Deactivated successfully. May 13 23:58:02.829323 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:58:02.834284 containerd[1517]: time="2025-05-13T23:58:02.834234473Z" level=info msg="shim disconnected" id=1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07 namespace=k8s.io May 13 23:58:02.834380 containerd[1517]: time="2025-05-13T23:58:02.834284605Z" level=warning msg="cleaning up after shim disconnected" id=1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07 namespace=k8s.io May 13 23:58:02.834380 containerd[1517]: time="2025-05-13T23:58:02.834293258Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:58:02.991025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d-rootfs.mount: Deactivated successfully. May 13 23:58:03.665206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779886406.mount: Deactivated successfully. May 13 23:58:03.717577 kubelet[2603]: E0513 23:58:03.717536 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:03.721204 containerd[1517]: time="2025-05-13T23:58:03.721160453Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 23:58:03.744709 containerd[1517]: time="2025-05-13T23:58:03.744493718Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\"" May 13 23:58:03.745228 containerd[1517]: time="2025-05-13T23:58:03.745205702Z" level=info msg="StartContainer for \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\"" May 13 23:58:03.777932 systemd[1]: Started cri-containerd-c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f.scope - libcontainer container c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f. May 13 23:58:03.815253 containerd[1517]: time="2025-05-13T23:58:03.815196181Z" level=info msg="StartContainer for \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\" returns successfully" May 13 23:58:03.817306 systemd[1]: cri-containerd-c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f.scope: Deactivated successfully. 
May 13 23:58:03.867338 containerd[1517]: time="2025-05-13T23:58:03.867221634Z" level=info msg="shim disconnected" id=c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f namespace=k8s.io May 13 23:58:03.867338 containerd[1517]: time="2025-05-13T23:58:03.867325604Z" level=warning msg="cleaning up after shim disconnected" id=c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f namespace=k8s.io May 13 23:58:03.867338 containerd[1517]: time="2025-05-13T23:58:03.867338045Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:58:04.401156 containerd[1517]: time="2025-05-13T23:58:04.401101122Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:04.401853 containerd[1517]: time="2025-05-13T23:58:04.401790716Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 13 23:58:04.402913 containerd[1517]: time="2025-05-13T23:58:04.402866964Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:04.404143 containerd[1517]: time="2025-05-13T23:58:04.404097435Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.439489073s" May 13 23:58:04.404143 containerd[1517]: time="2025-05-13T23:58:04.404139254Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 23:58:04.406177 containerd[1517]: time="2025-05-13T23:58:04.406146463Z" level=info msg="CreateContainer within sandbox \"ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 23:58:04.421443 containerd[1517]: time="2025-05-13T23:58:04.421396240Z" level=info msg="CreateContainer within sandbox \"ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\"" May 13 23:58:04.421913 containerd[1517]: time="2025-05-13T23:58:04.421805753Z" level=info msg="StartContainer for \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\"" May 13 23:58:04.446881 systemd[1]: Started cri-containerd-499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5.scope - libcontainer container 499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5. 
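The two PullImage results in this capture carry enough detail to estimate registry throughput: 166730503 bytes in 9.901926637s is roughly 16.8 MB/s for the cilium image, and 18904197 bytes in 2.439489073s is roughly 7.7 MB/s for operator-generic. A quick check using the figures from the records above:

```go
package main

import "fmt"

func main() {
	// Byte counts and durations copied from the PullImage records above.
	pulls := []struct {
		name  string
		bytes float64
		secs  float64
	}{
		{"cilium", 166730503, 9.901926637},
		{"operator-generic", 18904197, 2.439489073},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MB/s\n", p.name, p.bytes/p.secs/1e6)
	}
	// Output: cilium: 16.8 MB/s, operator-generic: 7.7 MB/s
}
```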
May 13 23:58:04.649706 containerd[1517]: time="2025-05-13T23:58:04.647321442Z" level=info msg="StartContainer for \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\" returns successfully" May 13 23:58:04.735997 kubelet[2603]: E0513 23:58:04.734681 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:04.737602 kubelet[2603]: E0513 23:58:04.737433 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:04.738515 containerd[1517]: time="2025-05-13T23:58:04.738459714Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 23:58:05.058097 containerd[1517]: time="2025-05-13T23:58:05.057593436Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\"" May 13 23:58:05.058505 containerd[1517]: time="2025-05-13T23:58:05.058264815Z" level=info msg="StartContainer for \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\"" May 13 23:58:05.094994 systemd[1]: run-containerd-runc-k8s.io-a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5-runc.bFXfgv.mount: Deactivated successfully. May 13 23:58:05.102846 systemd[1]: Started cri-containerd-a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5.scope - libcontainer container a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5. May 13 23:58:05.170418 systemd[1]: cri-containerd-a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5.scope: Deactivated successfully. May 13 23:58:05.172261 containerd[1517]: time="2025-05-13T23:58:05.172120120Z" level=info msg="StartContainer for \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\" returns successfully" May 13 23:58:05.198487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5-rootfs.mount: Deactivated successfully. 
May 13 23:58:05.204780 containerd[1517]: time="2025-05-13T23:58:05.204699970Z" level=info msg="shim disconnected" id=a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5 namespace=k8s.io May 13 23:58:05.204780 containerd[1517]: time="2025-05-13T23:58:05.204779852Z" level=warning msg="cleaning up after shim disconnected" id=a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5 namespace=k8s.io May 13 23:58:05.205030 containerd[1517]: time="2025-05-13T23:58:05.204788767Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:58:05.741706 kubelet[2603]: E0513 23:58:05.741445 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:05.741706 kubelet[2603]: E0513 23:58:05.741536 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:05.743571 containerd[1517]: time="2025-05-13T23:58:05.743529050Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 23:58:05.845054 kubelet[2603]: I0513 23:58:05.844986 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-27h4n" podStartSLOduration=2.67840655 podStartE2EDuration="14.844964718s" podCreationTimestamp="2025-05-13 23:57:51 +0000 UTC" firstStartedPulling="2025-05-13 23:57:52.238278663 +0000 UTC m=+7.651441022" lastFinishedPulling="2025-05-13 23:58:04.404836831 +0000 UTC m=+19.817999190" observedRunningTime="2025-05-13 23:58:05.069348817 +0000 UTC m=+20.482511176" watchObservedRunningTime="2025-05-13 23:58:05.844964718 +0000 UTC m=+21.258127077" May 13 23:58:06.162713 containerd[1517]: time="2025-05-13T23:58:06.162628337Z" level=info msg="CreateContainer within sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\"" May 13 23:58:06.163512 containerd[1517]: time="2025-05-13T23:58:06.163433021Z" level=info msg="StartContainer for \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\"" May 13 23:58:06.195023 systemd[1]: Started cri-containerd-839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846.scope - libcontainer container 839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846. 
May 13 23:58:06.293078 containerd[1517]: time="2025-05-13T23:58:06.293027787Z" level=info msg="StartContainer for \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\" returns successfully" May 13 23:58:06.391115 kubelet[2603]: I0513 23:58:06.391066 2603 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 23:58:06.745376 kubelet[2603]: E0513 23:58:06.745346 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:07.021375 kubelet[2603]: I0513 23:58:07.021072 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hs8x4" podStartSLOduration=6.110038463 podStartE2EDuration="16.021053091s" podCreationTimestamp="2025-05-13 23:57:51 +0000 UTC" firstStartedPulling="2025-05-13 23:57:52.053300539 +0000 UTC m=+7.466462899" lastFinishedPulling="2025-05-13 23:58:01.964315167 +0000 UTC m=+17.377477527" observedRunningTime="2025-05-13 23:58:07.013420258 +0000 UTC m=+22.426582637" watchObservedRunningTime="2025-05-13 23:58:07.021053091 +0000 UTC m=+22.434215450" May 13 23:58:07.025618 systemd[1]: Created slice kubepods-burstable-podb6bd7588_ac51_4789_bf8d_60e6d4cfdcf3.slice - libcontainer container kubepods-burstable-podb6bd7588_ac51_4789_bf8d_60e6d4cfdcf3.slice. May 13 23:58:07.031166 systemd[1]: Created slice kubepods-burstable-poda4d26c06_7dfa_4d33_af02_43d1886bb004.slice - libcontainer container kubepods-burstable-poda4d26c06_7dfa_4d33_af02_43d1886bb004.slice. May 13 23:58:07.098915 kubelet[2603]: I0513 23:58:07.098838 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6bd7588-ac51-4789-bf8d-60e6d4cfdcf3-config-volume\") pod \"coredns-6f6b679f8f-dhz56\" (UID: \"b6bd7588-ac51-4789-bf8d-60e6d4cfdcf3\") " pod="kube-system/coredns-6f6b679f8f-dhz56" May 13 23:58:07.098915 kubelet[2603]: I0513 23:58:07.098904 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqqg\" (UniqueName: \"kubernetes.io/projected/b6bd7588-ac51-4789-bf8d-60e6d4cfdcf3-kube-api-access-gpqqg\") pod \"coredns-6f6b679f8f-dhz56\" (UID: \"b6bd7588-ac51-4789-bf8d-60e6d4cfdcf3\") " pod="kube-system/coredns-6f6b679f8f-dhz56" May 13 23:58:07.099177 kubelet[2603]: I0513 23:58:07.098973 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6wk2\" (UniqueName: \"kubernetes.io/projected/a4d26c06-7dfa-4d33-af02-43d1886bb004-kube-api-access-j6wk2\") pod \"coredns-6f6b679f8f-zcc7m\" (UID: \"a4d26c06-7dfa-4d33-af02-43d1886bb004\") " pod="kube-system/coredns-6f6b679f8f-zcc7m" May 13 23:58:07.099177 kubelet[2603]: I0513 23:58:07.099012 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4d26c06-7dfa-4d33-af02-43d1886bb004-config-volume\") pod \"coredns-6f6b679f8f-zcc7m\" (UID: \"a4d26c06-7dfa-4d33-af02-43d1886bb004\") " pod="kube-system/coredns-6f6b679f8f-zcc7m" May 13 23:58:07.329498 kubelet[2603]: E0513 23:58:07.329441 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:07.333861 kubelet[2603]: E0513 23:58:07.333816 2603 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:07.337684 containerd[1517]: time="2025-05-13T23:58:07.337620090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zcc7m,Uid:a4d26c06-7dfa-4d33-af02-43d1886bb004,Namespace:kube-system,Attempt:0,}" May 13 23:58:07.339758 containerd[1517]: time="2025-05-13T23:58:07.339690115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dhz56,Uid:b6bd7588-ac51-4789-bf8d-60e6d4cfdcf3,Namespace:kube-system,Attempt:0,}" May 13 23:58:07.747676 kubelet[2603]: E0513 23:58:07.747552 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:08.591032 systemd-networkd[1450]: cilium_host: Link UP May 13 23:58:08.591194 systemd-networkd[1450]: cilium_net: Link UP May 13 23:58:08.591199 systemd-networkd[1450]: cilium_net: Gained carrier May 13 23:58:08.591408 systemd-networkd[1450]: cilium_host: Gained carrier May 13 23:58:08.699845 systemd-networkd[1450]: cilium_host: Gained IPv6LL May 13 23:58:08.714223 systemd-networkd[1450]: cilium_vxlan: Link UP May 13 23:58:08.714232 systemd-networkd[1450]: cilium_vxlan: Gained carrier May 13 23:58:08.749498 kubelet[2603]: E0513 23:58:08.749457 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:08.942762 kernel: NET: Registered PF_ALG protocol family May 13 23:58:09.260928 systemd-networkd[1450]: cilium_net: Gained IPv6LL May 13 23:58:09.333112 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:52532.service - OpenSSH per-connection server daemon (10.0.0.1:52532). May 13 23:58:09.383684 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 52532 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:09.385426 sshd-session[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:09.390146 systemd-logind[1499]: New session 8 of user core. May 13 23:58:09.396858 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:58:09.523140 sshd[3675]: Connection closed by 10.0.0.1 port 52532 May 13 23:58:09.523629 sshd-session[3669]: pam_unix(sshd:session): session closed for user core May 13 23:58:09.528184 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:52532.service: Deactivated successfully. May 13 23:58:09.530308 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:58:09.531062 systemd-logind[1499]: Session 8 logged out. Waiting for processes to exit. May 13 23:58:09.531948 systemd-logind[1499]: Removed session 8. 
May 13 23:58:09.681695 systemd-networkd[1450]: lxc_health: Link UP May 13 23:58:09.683652 systemd-networkd[1450]: lxc_health: Gained carrier May 13 23:58:09.962699 kubelet[2603]: E0513 23:58:09.962646 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:09.990784 kernel: eth0: renamed from tmp44aaa May 13 23:58:09.998139 systemd-networkd[1450]: lxcab47dee8c888: Link UP May 13 23:58:10.006854 kernel: eth0: renamed from tmpe8564 May 13 23:58:10.013175 systemd-networkd[1450]: lxcc5b2279439e1: Link UP May 13 23:58:10.013491 systemd-networkd[1450]: lxcab47dee8c888: Gained carrier May 13 23:58:10.014568 systemd-networkd[1450]: lxcc5b2279439e1: Gained carrier May 13 23:58:10.543897 systemd-networkd[1450]: cilium_vxlan: Gained IPv6LL May 13 23:58:11.377816 systemd-networkd[1450]: lxcab47dee8c888: Gained IPv6LL May 13 23:58:11.563912 systemd-networkd[1450]: lxcc5b2279439e1: Gained IPv6LL May 13 23:58:11.693280 systemd-networkd[1450]: lxc_health: Gained IPv6LL May 13 23:58:13.590595 containerd[1517]: time="2025-05-13T23:58:13.590251140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:58:13.590595 containerd[1517]: time="2025-05-13T23:58:13.590357961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:58:13.590595 containerd[1517]: time="2025-05-13T23:58:13.590369320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:58:13.590595 containerd[1517]: time="2025-05-13T23:58:13.590479187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:58:13.613883 systemd[1]: Started cri-containerd-e8564f838e18f83af2c91821eaafc9d22782a27f038009fadf07070fb6c3e335.scope - libcontainer container e8564f838e18f83af2c91821eaafc9d22782a27f038009fadf07070fb6c3e335. May 13 23:58:13.622505 containerd[1517]: time="2025-05-13T23:58:13.621228280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:58:13.622505 containerd[1517]: time="2025-05-13T23:58:13.621292959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:58:13.622505 containerd[1517]: time="2025-05-13T23:58:13.621305321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:58:13.622505 containerd[1517]: time="2025-05-13T23:58:13.621389664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:58:13.633195 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:58:13.647938 systemd[1]: Started cri-containerd-44aaab98d2543d5e628bb11bcf101aa04e32624e6f957418a9a5e55d08ee6fe6.scope - libcontainer container 44aaab98d2543d5e628bb11bcf101aa04e32624e6f957418a9a5e55d08ee6fe6. 
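The cilium_vxlan link-up sequence above (Link UP, then Gained carrier, then Gained IPv6LL) is ordinary rtnetlink traffic from the CNI datapath setup. A hypothetical sketch of creating such an overlay device with the vishvananda/netlink package; the VNI and UDP port are illustrative assumptions, not values recorded in this log, and this is not Cilium's actual code:

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

func main() {
	// Hypothetical reconstruction of a vxlan overlay device like the
	// cilium_vxlan link above. VNI 2 and UDP port 8472 are assumptions
	// (the log does not record them), and this needs CAP_NET_ADMIN.
	vx := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{Name: "cilium_vxlan"},
		VxlanId:   2,
		Port:      8472,
	}
	if err := netlink.LinkAdd(vx); err != nil {
		fmt.Println("LinkAdd:", err)
		return
	}
	if err := netlink.LinkSetUp(vx); err != nil { // "Link UP ... Gained carrier"
		fmt.Println("LinkSetUp:", err)
	}
}
```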
May 13 23:58:13.660392 containerd[1517]: time="2025-05-13T23:58:13.660334527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dhz56,Uid:b6bd7588-ac51-4789-bf8d-60e6d4cfdcf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8564f838e18f83af2c91821eaafc9d22782a27f038009fadf07070fb6c3e335\"" May 13 23:58:13.661065 kubelet[2603]: E0513 23:58:13.661035 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:13.661631 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:58:13.664434 containerd[1517]: time="2025-05-13T23:58:13.664394182Z" level=info msg="CreateContainer within sandbox \"e8564f838e18f83af2c91821eaafc9d22782a27f038009fadf07070fb6c3e335\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:58:13.685138 containerd[1517]: time="2025-05-13T23:58:13.685103868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zcc7m,Uid:a4d26c06-7dfa-4d33-af02-43d1886bb004,Namespace:kube-system,Attempt:0,} returns sandbox id \"44aaab98d2543d5e628bb11bcf101aa04e32624e6f957418a9a5e55d08ee6fe6\"" May 13 23:58:13.685969 kubelet[2603]: E0513 23:58:13.685949 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:13.687574 containerd[1517]: time="2025-05-13T23:58:13.687535164Z" level=info msg="CreateContainer within sandbox \"44aaab98d2543d5e628bb11bcf101aa04e32624e6f957418a9a5e55d08ee6fe6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:58:14.537886 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:50594.service - OpenSSH per-connection server daemon (10.0.0.1:50594). May 13 23:58:14.658665 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 50594 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:14.660557 sshd-session[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:14.665066 systemd-logind[1499]: New session 9 of user core. May 13 23:58:14.672859 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:58:14.809270 sshd[3942]: Connection closed by 10.0.0.1 port 50594 May 13 23:58:14.809651 sshd-session[3940]: pam_unix(sshd:session): session closed for user core May 13 23:58:14.813770 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:50594.service: Deactivated successfully. May 13 23:58:14.816277 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:58:14.817064 systemd-logind[1499]: Session 9 logged out. Waiting for processes to exit. May 13 23:58:14.818015 systemd-logind[1499]: Removed session 9. 
May 13 23:58:14.923011 containerd[1517]: time="2025-05-13T23:58:14.922949261Z" level=info msg="CreateContainer within sandbox \"44aaab98d2543d5e628bb11bcf101aa04e32624e6f957418a9a5e55d08ee6fe6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fad4eda85ba5abd7fe1b8fe12de1755e9bc0041243fca3732f877fae9f33766c\"" May 13 23:58:14.923547 containerd[1517]: time="2025-05-13T23:58:14.923524261Z" level=info msg="StartContainer for \"fad4eda85ba5abd7fe1b8fe12de1755e9bc0041243fca3732f877fae9f33766c\"" May 13 23:58:14.952868 systemd[1]: Started cri-containerd-fad4eda85ba5abd7fe1b8fe12de1755e9bc0041243fca3732f877fae9f33766c.scope - libcontainer container fad4eda85ba5abd7fe1b8fe12de1755e9bc0041243fca3732f877fae9f33766c. May 13 23:58:15.251849 containerd[1517]: time="2025-05-13T23:58:15.251794368Z" level=info msg="StartContainer for \"fad4eda85ba5abd7fe1b8fe12de1755e9bc0041243fca3732f877fae9f33766c\" returns successfully" May 13 23:58:15.251849 containerd[1517]: time="2025-05-13T23:58:15.251838133Z" level=info msg="CreateContainer within sandbox \"e8564f838e18f83af2c91821eaafc9d22782a27f038009fadf07070fb6c3e335\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e216aa4c12b9cfd99a2b6abc96a381b6c9fa77b71e6e858ec0c910e9d08b2a9\"" May 13 23:58:15.252559 containerd[1517]: time="2025-05-13T23:58:15.252519690Z" level=info msg="StartContainer for \"1e216aa4c12b9cfd99a2b6abc96a381b6c9fa77b71e6e858ec0c910e9d08b2a9\"" May 13 23:58:15.282875 systemd[1]: Started cri-containerd-1e216aa4c12b9cfd99a2b6abc96a381b6c9fa77b71e6e858ec0c910e9d08b2a9.scope - libcontainer container 1e216aa4c12b9cfd99a2b6abc96a381b6c9fa77b71e6e858ec0c910e9d08b2a9. May 13 23:58:15.425113 containerd[1517]: time="2025-05-13T23:58:15.425031598Z" level=info msg="StartContainer for \"1e216aa4c12b9cfd99a2b6abc96a381b6c9fa77b71e6e858ec0c910e9d08b2a9\" returns successfully" May 13 23:58:15.539353 kubelet[2603]: I0513 23:58:15.539209 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:58:15.539995 kubelet[2603]: E0513 23:58:15.539820 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:15.798948 kubelet[2603]: E0513 23:58:15.798713 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:15.801710 kubelet[2603]: E0513 23:58:15.801686 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:15.801710 kubelet[2603]: E0513 23:58:15.801697 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:15.937143 kubelet[2603]: I0513 23:58:15.937072 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dhz56" podStartSLOduration=24.937046261 podStartE2EDuration="24.937046261s" podCreationTimestamp="2025-05-13 23:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:15.926357115 +0000 UTC m=+31.339519474" watchObservedRunningTime="2025-05-13 23:58:15.937046261 +0000 UTC m=+31.350208620" May 13 
23:58:15.975367 kubelet[2603]: I0513 23:58:15.975091 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zcc7m" podStartSLOduration=24.975072742000002 podStartE2EDuration="24.975072742s" podCreationTimestamp="2025-05-13 23:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:15.937253055 +0000 UTC m=+31.350415414" watchObservedRunningTime="2025-05-13 23:58:15.975072742 +0000 UTC m=+31.388235101" May 13 23:58:16.803303 kubelet[2603]: E0513 23:58:16.803062 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:16.803303 kubelet[2603]: E0513 23:58:16.803140 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:17.804351 kubelet[2603]: E0513 23:58:17.804300 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:17.804351 kubelet[2603]: E0513 23:58:17.804334 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:18.806412 kubelet[2603]: E0513 23:58:18.806362 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:19.825301 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). May 13 23:58:19.874431 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:19.876174 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:19.880698 systemd-logind[1499]: New session 10 of user core. May 13 23:58:19.889853 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:58:20.010613 sshd[4047]: Connection closed by 10.0.0.1 port 50606 May 13 23:58:20.011026 sshd-session[4045]: pam_unix(sshd:session): session closed for user core May 13 23:58:20.015341 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:50606.service: Deactivated successfully. May 13 23:58:20.017664 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:58:20.018458 systemd-logind[1499]: Session 10 logged out. Waiting for processes to exit. May 13 23:58:20.019569 systemd-logind[1499]: Removed session 10. May 13 23:58:25.023142 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:44594.service - OpenSSH per-connection server daemon (10.0.0.1:44594). May 13 23:58:25.066098 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 44594 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:25.067592 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:25.071788 systemd-logind[1499]: New session 11 of user core. May 13 23:58:25.082870 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 13 23:58:25.206783 sshd[4066]: Connection closed by 10.0.0.1 port 44594 May 13 23:58:25.207189 sshd-session[4064]: pam_unix(sshd:session): session closed for user core May 13 23:58:25.211373 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:44594.service: Deactivated successfully. May 13 23:58:25.214181 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:58:25.215076 systemd-logind[1499]: Session 11 logged out. Waiting for processes to exit. May 13 23:58:25.216203 systemd-logind[1499]: Removed session 11. May 13 23:58:30.221510 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:44604.service - OpenSSH per-connection server daemon (10.0.0.1:44604). May 13 23:58:30.264682 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 44604 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:30.266094 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:30.270180 systemd-logind[1499]: New session 12 of user core. May 13 23:58:30.279840 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 23:58:30.388706 sshd[4082]: Connection closed by 10.0.0.1 port 44604 May 13 23:58:30.389098 sshd-session[4080]: pam_unix(sshd:session): session closed for user core May 13 23:58:30.402780 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:44604.service: Deactivated successfully. May 13 23:58:30.404768 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:58:30.406256 systemd-logind[1499]: Session 12 logged out. Waiting for processes to exit. May 13 23:58:30.414072 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:44612.service - OpenSSH per-connection server daemon (10.0.0.1:44612). May 13 23:58:30.415028 systemd-logind[1499]: Removed session 12. May 13 23:58:30.453087 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 44612 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:30.454420 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:30.458653 systemd-logind[1499]: New session 13 of user core. May 13 23:58:30.465869 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:58:30.615297 sshd[4098]: Connection closed by 10.0.0.1 port 44612 May 13 23:58:30.616434 sshd-session[4095]: pam_unix(sshd:session): session closed for user core May 13 23:58:30.634970 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:44612.service: Deactivated successfully. May 13 23:58:30.640133 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:58:30.641225 systemd-logind[1499]: Session 13 logged out. Waiting for processes to exit. May 13 23:58:30.653123 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:44624.service - OpenSSH per-connection server daemon (10.0.0.1:44624). May 13 23:58:30.654262 systemd-logind[1499]: Removed session 13. May 13 23:58:30.695638 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 44624 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:30.697445 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:30.702212 systemd-logind[1499]: New session 14 of user core. May 13 23:58:30.717881 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 13 23:58:30.842625 sshd[4111]: Connection closed by 10.0.0.1 port 44624 May 13 23:58:30.843045 sshd-session[4108]: pam_unix(sshd:session): session closed for user core May 13 23:58:30.847348 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:44624.service: Deactivated successfully. May 13 23:58:30.849400 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:58:30.850192 systemd-logind[1499]: Session 14 logged out. Waiting for processes to exit. May 13 23:58:30.851382 systemd-logind[1499]: Removed session 14. May 13 23:58:35.863987 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:37398.service - OpenSSH per-connection server daemon (10.0.0.1:37398). May 13 23:58:35.910145 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 37398 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:35.912033 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:35.917515 systemd-logind[1499]: New session 15 of user core. May 13 23:58:35.926864 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:58:36.049637 sshd[4126]: Connection closed by 10.0.0.1 port 37398 May 13 23:58:36.050022 sshd-session[4124]: pam_unix(sshd:session): session closed for user core May 13 23:58:36.054373 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:37398.service: Deactivated successfully. May 13 23:58:36.056757 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:58:36.057515 systemd-logind[1499]: Session 15 logged out. Waiting for processes to exit. May 13 23:58:36.058428 systemd-logind[1499]: Removed session 15. May 13 23:58:41.061925 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:37402.service - OpenSSH per-connection server daemon (10.0.0.1:37402). May 13 23:58:41.105802 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 37402 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:41.107401 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:41.111764 systemd-logind[1499]: New session 16 of user core. May 13 23:58:41.121858 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 23:58:41.264057 sshd[4143]: Connection closed by 10.0.0.1 port 37402 May 13 23:58:41.264487 sshd-session[4141]: pam_unix(sshd:session): session closed for user core May 13 23:58:41.269981 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:37402.service: Deactivated successfully. May 13 23:58:41.272348 systemd[1]: session-16.scope: Deactivated successfully. May 13 23:58:41.273069 systemd-logind[1499]: Session 16 logged out. Waiting for processes to exit. May 13 23:58:41.273944 systemd-logind[1499]: Removed session 16. May 13 23:58:46.276506 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:53026.service - OpenSSH per-connection server daemon (10.0.0.1:53026). May 13 23:58:46.319687 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 53026 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:46.321397 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:46.325989 systemd-logind[1499]: New session 17 of user core. May 13 23:58:46.339863 systemd[1]: Started session-17.scope - Session 17 of User core. 
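
Each of these short cycles is the same lifecycle: sshd accepts the public key, pam_unix opens the session, systemd-logind registers it, systemd runs it as a transient session-N.scope, and on disconnect the per-connection sshd@… unit and the scope are deactivated. A hypothetical client-side sketch that would produce exactly one such open/close pair, assuming golang.org/x/crypto/ssh and a key the server already trusts:

// sshcycle.go - sketch of the client side of one log cycle above.
// Host, user, and key path mirror the log but are assumptions here.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable in a sketch, not in production
	}

	// Dial triggers "Accepted publickey ..." and a new session-N.scope.
	client, err := ssh.Dial("tcp", "10.0.0.59:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	if _, err := sess.CombinedOutput("true"); err != nil {
		log.Fatal(err)
	}
	sess.Close()
	client.Close() // produces "Connection closed ..." and scope deactivation
}
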
May 13 23:58:46.448528 sshd[4162]: Connection closed by 10.0.0.1 port 53026 May 13 23:58:46.449177 sshd-session[4160]: pam_unix(sshd:session): session closed for user core May 13 23:58:46.460189 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:53026.service: Deactivated successfully. May 13 23:58:46.462891 systemd[1]: session-17.scope: Deactivated successfully. May 13 23:58:46.465301 systemd-logind[1499]: Session 17 logged out. Waiting for processes to exit. May 13 23:58:46.484081 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:53034.service - OpenSSH per-connection server daemon (10.0.0.1:53034). May 13 23:58:46.485164 systemd-logind[1499]: Removed session 17. May 13 23:58:46.524228 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 53034 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:46.525861 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:46.530268 systemd-logind[1499]: New session 18 of user core. May 13 23:58:46.542857 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 23:58:47.192889 sshd[4178]: Connection closed by 10.0.0.1 port 53034 May 13 23:58:47.193363 sshd-session[4175]: pam_unix(sshd:session): session closed for user core May 13 23:58:47.205980 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:53034.service: Deactivated successfully. May 13 23:58:47.208162 systemd[1]: session-18.scope: Deactivated successfully. May 13 23:58:47.209937 systemd-logind[1499]: Session 18 logged out. Waiting for processes to exit. May 13 23:58:47.218054 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:53038.service - OpenSSH per-connection server daemon (10.0.0.1:53038). May 13 23:58:47.219131 systemd-logind[1499]: Removed session 18. May 13 23:58:47.262679 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:47.264266 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:47.269091 systemd-logind[1499]: New session 19 of user core. May 13 23:58:47.278869 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 23:58:48.392384 sshd[4192]: Connection closed by 10.0.0.1 port 53038 May 13 23:58:48.393509 sshd-session[4189]: pam_unix(sshd:session): session closed for user core May 13 23:58:48.404126 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:53038.service: Deactivated successfully. May 13 23:58:48.408390 systemd[1]: session-19.scope: Deactivated successfully. May 13 23:58:48.409962 systemd-logind[1499]: Session 19 logged out. Waiting for processes to exit. May 13 23:58:48.418507 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:53052.service - OpenSSH per-connection server daemon (10.0.0.1:53052). May 13 23:58:48.419501 systemd-logind[1499]: Removed session 19. May 13 23:58:48.457643 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 53052 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:48.459177 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:48.464000 systemd-logind[1499]: New session 20 of user core. May 13 23:58:48.472857 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 13 23:58:48.704518 sshd[4215]: Connection closed by 10.0.0.1 port 53052 May 13 23:58:48.705427 sshd-session[4212]: pam_unix(sshd:session): session closed for user core May 13 23:58:48.717814 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:53052.service: Deactivated successfully. May 13 23:58:48.720051 systemd[1]: session-20.scope: Deactivated successfully. May 13 23:58:48.721902 systemd-logind[1499]: Session 20 logged out. Waiting for processes to exit. May 13 23:58:48.727994 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:53066.service - OpenSSH per-connection server daemon (10.0.0.1:53066). May 13 23:58:48.729183 systemd-logind[1499]: Removed session 20. May 13 23:58:48.767147 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 53066 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:48.768698 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:48.773256 systemd-logind[1499]: New session 21 of user core. May 13 23:58:48.782854 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 23:58:48.901618 sshd[4228]: Connection closed by 10.0.0.1 port 53066 May 13 23:58:48.902030 sshd-session[4225]: pam_unix(sshd:session): session closed for user core May 13 23:58:48.906566 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:53066.service: Deactivated successfully. May 13 23:58:48.908874 systemd[1]: session-21.scope: Deactivated successfully. May 13 23:58:48.909581 systemd-logind[1499]: Session 21 logged out. Waiting for processes to exit. May 13 23:58:48.910527 systemd-logind[1499]: Removed session 21. May 13 23:58:53.913817 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:46962.service - OpenSSH per-connection server daemon (10.0.0.1:46962). May 13 23:58:53.956103 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 46962 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:53.957564 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:53.961786 systemd-logind[1499]: New session 22 of user core. May 13 23:58:53.968865 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 23:58:54.073346 sshd[4248]: Connection closed by 10.0.0.1 port 46962 May 13 23:58:54.073693 sshd-session[4246]: pam_unix(sshd:session): session closed for user core May 13 23:58:54.077590 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:46962.service: Deactivated successfully. May 13 23:58:54.079966 systemd[1]: session-22.scope: Deactivated successfully. May 13 23:58:54.080805 systemd-logind[1499]: Session 22 logged out. Waiting for processes to exit. May 13 23:58:54.081614 systemd-logind[1499]: Removed session 22. May 13 23:58:58.670155 kubelet[2603]: E0513 23:58:58.670091 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:59.087040 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:46972.service - OpenSSH per-connection server daemon (10.0.0.1:46972). May 13 23:58:59.130796 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 46972 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:58:59.132473 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:59.136994 systemd-logind[1499]: New session 23 of user core. May 13 23:58:59.148855 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 13 23:58:59.260202 sshd[4263]: Connection closed by 10.0.0.1 port 46972 May 13 23:58:59.260631 sshd-session[4261]: pam_unix(sshd:session): session closed for user core May 13 23:58:59.265274 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:46972.service: Deactivated successfully. May 13 23:58:59.267562 systemd[1]: session-23.scope: Deactivated successfully. May 13 23:58:59.268453 systemd-logind[1499]: Session 23 logged out. Waiting for processes to exit. May 13 23:58:59.269300 systemd-logind[1499]: Removed session 23. May 13 23:59:04.272936 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:54310.service - OpenSSH per-connection server daemon (10.0.0.1:54310). May 13 23:59:04.316526 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 54310 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:59:04.318148 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:04.323848 systemd-logind[1499]: New session 24 of user core. May 13 23:59:04.331918 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 23:59:04.439448 sshd[4278]: Connection closed by 10.0.0.1 port 54310 May 13 23:59:04.439866 sshd-session[4276]: pam_unix(sshd:session): session closed for user core May 13 23:59:04.443809 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:54310.service: Deactivated successfully. May 13 23:59:04.446066 systemd[1]: session-24.scope: Deactivated successfully. May 13 23:59:04.446823 systemd-logind[1499]: Session 24 logged out. Waiting for processes to exit. May 13 23:59:04.447801 systemd-logind[1499]: Removed session 24. May 13 23:59:05.669515 kubelet[2603]: E0513 23:59:05.669463 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:09.452558 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:54318.service - OpenSSH per-connection server daemon (10.0.0.1:54318). May 13 23:59:09.495797 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 54318 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:59:09.497394 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:09.501670 systemd-logind[1499]: New session 25 of user core. May 13 23:59:09.515900 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 23:59:09.618451 sshd[4294]: Connection closed by 10.0.0.1 port 54318 May 13 23:59:09.618849 sshd-session[4292]: pam_unix(sshd:session): session closed for user core May 13 23:59:09.633929 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:54318.service: Deactivated successfully. May 13 23:59:09.636331 systemd[1]: session-25.scope: Deactivated successfully. May 13 23:59:09.638314 systemd-logind[1499]: Session 25 logged out. Waiting for processes to exit. May 13 23:59:09.639685 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:54328.service - OpenSSH per-connection server daemon (10.0.0.1:54328). May 13 23:59:09.640660 systemd-logind[1499]: Removed session 25. May 13 23:59:09.687859 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 54328 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:59:09.689265 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:09.693529 systemd-logind[1499]: New session 26 of user core. May 13 23:59:09.700846 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 13 23:59:11.057820 containerd[1517]: time="2025-05-13T23:59:11.057761416Z" level=info msg="StopContainer for \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\" with timeout 30 (s)" May 13 23:59:11.058428 containerd[1517]: time="2025-05-13T23:59:11.058201600Z" level=info msg="Stop container \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\" with signal terminated" May 13 23:59:11.071117 systemd[1]: cri-containerd-499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5.scope: Deactivated successfully. May 13 23:59:11.093409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5-rootfs.mount: Deactivated successfully. May 13 23:59:11.093939 containerd[1517]: time="2025-05-13T23:59:11.093670586Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:59:11.096059 containerd[1517]: time="2025-05-13T23:59:11.096028684Z" level=info msg="StopContainer for \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\" with timeout 2 (s)" May 13 23:59:11.096281 containerd[1517]: time="2025-05-13T23:59:11.096232120Z" level=info msg="Stop container \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\" with signal terminated" May 13 23:59:11.101193 containerd[1517]: time="2025-05-13T23:59:11.101126134Z" level=info msg="shim disconnected" id=499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5 namespace=k8s.io May 13 23:59:11.101193 containerd[1517]: time="2025-05-13T23:59:11.101180004Z" level=warning msg="cleaning up after shim disconnected" id=499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5 namespace=k8s.io May 13 23:59:11.101193 containerd[1517]: time="2025-05-13T23:59:11.101190874Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:59:11.104220 systemd-networkd[1450]: lxc_health: Link DOWN May 13 23:59:11.104632 systemd-networkd[1450]: lxc_health: Lost carrier May 13 23:59:11.120654 containerd[1517]: time="2025-05-13T23:59:11.120609139Z" level=info msg="StopContainer for \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\" returns successfully" May 13 23:59:11.125342 containerd[1517]: time="2025-05-13T23:59:11.125309926Z" level=info msg="StopPodSandbox for \"ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f\"" May 13 23:59:11.126291 systemd[1]: cri-containerd-839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846.scope: Deactivated successfully. May 13 23:59:11.126683 systemd[1]: cri-containerd-839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846.scope: Consumed 6.937s CPU time, 125.4M memory peak, 220K read from disk, 13.3M written to disk. May 13 23:59:11.135195 containerd[1517]: time="2025-05-13T23:59:11.125352265Z" level=info msg="Container to stop \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:11.137987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f-shm.mount: Deactivated successfully. May 13 23:59:11.143114 systemd[1]: cri-containerd-ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f.scope: Deactivated successfully. 
May 13 23:59:11.150634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846-rootfs.mount: Deactivated successfully. May 13 23:59:11.159436 containerd[1517]: time="2025-05-13T23:59:11.159372664Z" level=info msg="shim disconnected" id=839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846 namespace=k8s.io May 13 23:59:11.159436 containerd[1517]: time="2025-05-13T23:59:11.159431704Z" level=warning msg="cleaning up after shim disconnected" id=839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846 namespace=k8s.io May 13 23:59:11.159436 containerd[1517]: time="2025-05-13T23:59:11.159444457Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:59:11.166909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f-rootfs.mount: Deactivated successfully. May 13 23:59:11.168246 containerd[1517]: time="2025-05-13T23:59:11.168183918Z" level=info msg="shim disconnected" id=ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f namespace=k8s.io May 13 23:59:11.168246 containerd[1517]: time="2025-05-13T23:59:11.168240392Z" level=warning msg="cleaning up after shim disconnected" id=ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f namespace=k8s.io May 13 23:59:11.168246 containerd[1517]: time="2025-05-13T23:59:11.168248938Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:59:11.179238 containerd[1517]: time="2025-05-13T23:59:11.179185268Z" level=warning msg="cleanup warnings time=\"2025-05-13T23:59:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 23:59:11.184467 containerd[1517]: time="2025-05-13T23:59:11.184422567Z" level=info msg="TearDown network for sandbox \"ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f\" successfully" May 13 23:59:11.184467 containerd[1517]: time="2025-05-13T23:59:11.184450458Z" level=info msg="StopPodSandbox for \"ff9db9ed4b2ca02ded2e307c33ca701c17a0738b001793e7b33875bbe239201f\" returns successfully" May 13 23:59:11.184782 containerd[1517]: time="2025-05-13T23:59:11.184675865Z" level=info msg="StopContainer for \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\" returns successfully" May 13 23:59:11.185164 containerd[1517]: time="2025-05-13T23:59:11.185136085Z" level=info msg="StopPodSandbox for \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\"" May 13 23:59:11.185225 containerd[1517]: time="2025-05-13T23:59:11.185162003Z" level=info msg="Container to stop \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:11.185225 containerd[1517]: time="2025-05-13T23:59:11.185192679Z" level=info msg="Container to stop \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:11.185225 containerd[1517]: time="2025-05-13T23:59:11.185202397Z" level=info msg="Container to stop \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:11.185225 containerd[1517]: time="2025-05-13T23:59:11.185210553Z" level=info msg="Container to stop \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\" 
must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:11.185225 containerd[1517]: time="2025-05-13T23:59:11.185219249Z" level=info msg="Container to stop \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:11.191386 systemd[1]: cri-containerd-ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a.scope: Deactivated successfully. May 13 23:59:11.214404 containerd[1517]: time="2025-05-13T23:59:11.214334714Z" level=info msg="shim disconnected" id=ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a namespace=k8s.io May 13 23:59:11.214404 containerd[1517]: time="2025-05-13T23:59:11.214396087Z" level=warning msg="cleaning up after shim disconnected" id=ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a namespace=k8s.io May 13 23:59:11.214404 containerd[1517]: time="2025-05-13T23:59:11.214404613Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:59:11.231670 containerd[1517]: time="2025-05-13T23:59:11.231563032Z" level=info msg="TearDown network for sandbox \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" successfully" May 13 23:59:11.231670 containerd[1517]: time="2025-05-13T23:59:11.231597305Z" level=info msg="StopPodSandbox for \"ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a\" returns successfully" May 13 23:59:11.346844 kubelet[2603]: I0513 23:59:11.346751 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-lib-modules\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.346844 kubelet[2603]: I0513 23:59:11.346790 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-cgroup\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.346844 kubelet[2603]: I0513 23:59:11.346814 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7896f9a0-04dc-4bdf-9417-b5b711ff4829-hubble-tls\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.346844 kubelet[2603]: I0513 23:59:11.346830 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5s9k\" (UniqueName: \"kubernetes.io/projected/7896f9a0-04dc-4bdf-9417-b5b711ff4829-kube-api-access-w5s9k\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.346844 kubelet[2603]: I0513 23:59:11.346846 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-run\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.346844 kubelet[2603]: I0513 23:59:11.346851 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). 
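
The teardown above is the standard CRI stop path: StopContainer with a 30-second grace period sends SIGTERM to the task, the cri-containerd scope deactivates, the shim disconnects and its rootfs mount is released, and only then does StopPodSandbox tear down the sandbox. A minimal sketch of that stop-wait-kill sequence using the containerd Go client; the socket path and k8s.io namespace are the defaults on a node like this, and the container ID is the one from the log:

// stopcontainer.go - sketch of the SIGTERM/timeout/SIGKILL stop sequence
// visible in the log, assuming the containerd Go client.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	cont, err := client.LoadContainer(ctx,
		"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5")
	if err != nil {
		log.Fatal(err)
	}
	task, err := cont.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// "Stop container ... with signal terminated"
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case <-exitCh: // clean exit within the grace period
	case <-time.After(30 * time.Second): // "with timeout 30 (s)"
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
	if _, err := task.Delete(ctx); err != nil { // shim disconnects here
		log.Fatal(err)
	}
}
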
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.347530 kubelet[2603]: I0513 23:59:11.346863 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-config-path\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347530 kubelet[2603]: I0513 23:59:11.346905 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-xtables-lock\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347530 kubelet[2603]: I0513 23:59:11.346923 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-bpf-maps\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347530 kubelet[2603]: I0513 23:59:11.346940 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac50abef-6ad4-40fb-9ac8-24f745cd4755-cilium-config-path\") pod \"ac50abef-6ad4-40fb-9ac8-24f745cd4755\" (UID: \"ac50abef-6ad4-40fb-9ac8-24f745cd4755\") " May 13 23:59:11.347530 kubelet[2603]: I0513 23:59:11.346955 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfdx4\" (UniqueName: \"kubernetes.io/projected/ac50abef-6ad4-40fb-9ac8-24f745cd4755-kube-api-access-gfdx4\") pod \"ac50abef-6ad4-40fb-9ac8-24f745cd4755\" (UID: \"ac50abef-6ad4-40fb-9ac8-24f745cd4755\") " May 13 23:59:11.347530 kubelet[2603]: I0513 23:59:11.346968 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-host-proc-sys-net\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347965 kubelet[2603]: I0513 23:59:11.346980 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-etc-cni-netd\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347965 kubelet[2603]: I0513 23:59:11.346993 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-host-proc-sys-kernel\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347965 kubelet[2603]: I0513 23:59:11.347007 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cni-path\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347965 kubelet[2603]: I0513 23:59:11.347020 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-hostproc\") pod 
\"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347965 kubelet[2603]: I0513 23:59:11.347036 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7896f9a0-04dc-4bdf-9417-b5b711ff4829-clustermesh-secrets\") pod \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\" (UID: \"7896f9a0-04dc-4bdf-9417-b5b711ff4829\") " May 13 23:59:11.347965 kubelet[2603]: I0513 23:59:11.347060 2603 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.349213 kubelet[2603]: I0513 23:59:11.346857 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.349272 kubelet[2603]: I0513 23:59:11.347475 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.349272 kubelet[2603]: I0513 23:59:11.348707 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.349272 kubelet[2603]: I0513 23:59:11.348750 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.349272 kubelet[2603]: I0513 23:59:11.348763 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.349272 kubelet[2603]: I0513 23:59:11.348776 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cni-path" (OuterVolumeSpecName: "cni-path") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.349398 kubelet[2603]: I0513 23:59:11.348789 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-hostproc" (OuterVolumeSpecName: "hostproc") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.349398 kubelet[2603]: I0513 23:59:11.349176 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.349398 kubelet[2603]: I0513 23:59:11.349278 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:11.350740 kubelet[2603]: I0513 23:59:11.350432 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:59:11.350740 kubelet[2603]: I0513 23:59:11.350683 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac50abef-6ad4-40fb-9ac8-24f745cd4755-kube-api-access-gfdx4" (OuterVolumeSpecName: "kube-api-access-gfdx4") pod "ac50abef-6ad4-40fb-9ac8-24f745cd4755" (UID: "ac50abef-6ad4-40fb-9ac8-24f745cd4755"). InnerVolumeSpecName "kube-api-access-gfdx4". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:59:11.351649 kubelet[2603]: I0513 23:59:11.351606 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7896f9a0-04dc-4bdf-9417-b5b711ff4829-kube-api-access-w5s9k" (OuterVolumeSpecName: "kube-api-access-w5s9k") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "kube-api-access-w5s9k". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:59:11.353039 kubelet[2603]: I0513 23:59:11.353005 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7896f9a0-04dc-4bdf-9417-b5b711ff4829-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:59:11.353039 kubelet[2603]: I0513 23:59:11.353027 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7896f9a0-04dc-4bdf-9417-b5b711ff4829-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7896f9a0-04dc-4bdf-9417-b5b711ff4829" (UID: "7896f9a0-04dc-4bdf-9417-b5b711ff4829"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:59:11.353217 kubelet[2603]: I0513 23:59:11.353132 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac50abef-6ad4-40fb-9ac8-24f745cd4755-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac50abef-6ad4-40fb-9ac8-24f745cd4755" (UID: "ac50abef-6ad4-40fb-9ac8-24f745cd4755"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:59:11.447355 kubelet[2603]: I0513 23:59:11.447313 2603 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447355 kubelet[2603]: I0513 23:59:11.447352 2603 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac50abef-6ad4-40fb-9ac8-24f745cd4755-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447457 kubelet[2603]: I0513 23:59:11.447364 2603 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gfdx4\" (UniqueName: \"kubernetes.io/projected/ac50abef-6ad4-40fb-9ac8-24f745cd4755-kube-api-access-gfdx4\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447457 kubelet[2603]: I0513 23:59:11.447375 2603 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447457 kubelet[2603]: I0513 23:59:11.447384 2603 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447457 kubelet[2603]: I0513 23:59:11.447395 2603 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447457 kubelet[2603]: I0513 23:59:11.447403 2603 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447457 kubelet[2603]: I0513 23:59:11.447411 2603 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447457 kubelet[2603]: I0513 23:59:11.447419 2603 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7896f9a0-04dc-4bdf-9417-b5b711ff4829-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447457 kubelet[2603]: I0513 23:59:11.447427 2603 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447649 kubelet[2603]: I0513 23:59:11.447435 2603 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7896f9a0-04dc-4bdf-9417-b5b711ff4829-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447649 kubelet[2603]: I0513 23:59:11.447443 2603 
reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w5s9k\" (UniqueName: \"kubernetes.io/projected/7896f9a0-04dc-4bdf-9417-b5b711ff4829-kube-api-access-w5s9k\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447649 kubelet[2603]: I0513 23:59:11.447452 2603 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447649 kubelet[2603]: I0513 23:59:11.447460 2603 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7896f9a0-04dc-4bdf-9417-b5b711ff4829-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.447649 kubelet[2603]: I0513 23:59:11.447468 2603 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7896f9a0-04dc-4bdf-9417-b5b711ff4829-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 23:59:11.920123 kubelet[2603]: I0513 23:59:11.920090 2603 scope.go:117] "RemoveContainer" containerID="839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846" May 13 23:59:11.926937 containerd[1517]: time="2025-05-13T23:59:11.926877735Z" level=info msg="RemoveContainer for \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\"" May 13 23:59:11.927086 systemd[1]: Removed slice kubepods-besteffort-podac50abef_6ad4_40fb_9ac8_24f745cd4755.slice - libcontainer container kubepods-besteffort-podac50abef_6ad4_40fb_9ac8_24f745cd4755.slice. May 13 23:59:11.928315 systemd[1]: Removed slice kubepods-burstable-pod7896f9a0_04dc_4bdf_9417_b5b711ff4829.slice - libcontainer container kubepods-burstable-pod7896f9a0_04dc_4bdf_9417_b5b711ff4829.slice. May 13 23:59:11.928406 systemd[1]: kubepods-burstable-pod7896f9a0_04dc_4bdf_9417_b5b711ff4829.slice: Consumed 7.055s CPU time, 125.8M memory peak, 236K read from disk, 13.3M written to disk. 
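
The UnmountVolume / TearDown succeeded / Volume detached triples above are kubelet's volume reconciler draining the deleted pods: any volume still mounted in the actual state of the world but absent from the desired state gets its plugin's TearDown, then is marked detached. An illustrative reduction of that loop, heavily simplified and not kubelet's real reconciler:

// reconciler.go - illustrative reduction of the unmount pass in the log:
// mounted volumes with no surviving pod are torn down and marked detached.
package main

import "fmt"

type Volume struct{ Name, PodUID string }

func reconcile(desired map[string]bool, actual []Volume) {
	for _, v := range actual {
		key := v.PodUID + "/" + v.Name
		if desired[key] {
			continue // still wanted by a live pod; leave it mounted
		}
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v.Name, v.PodUID)
		// The plugin's TearDown runs here (host-path: no-op; projected/secret: remove the dir).
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", v.Name)
		fmt.Printf("Volume detached for volume %q on node \"localhost\"\n", v.Name)
	}
}

func main() {
	actual := []Volume{
		{"lib-modules", "7896f9a0-04dc-4bdf-9417-b5b711ff4829"},
		{"hubble-tls", "7896f9a0-04dc-4bdf-9417-b5b711ff4829"},
	}
	reconcile(map[string]bool{}, actual) // pod deleted: desired state is empty
}
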
May 13 23:59:11.931190 containerd[1517]: time="2025-05-13T23:59:11.931146213Z" level=info msg="RemoveContainer for \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\" returns successfully" May 13 23:59:11.931454 kubelet[2603]: I0513 23:59:11.931429 2603 scope.go:117] "RemoveContainer" containerID="a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5" May 13 23:59:11.932360 containerd[1517]: time="2025-05-13T23:59:11.932334479Z" level=info msg="RemoveContainer for \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\"" May 13 23:59:11.936066 containerd[1517]: time="2025-05-13T23:59:11.936033816Z" level=info msg="RemoveContainer for \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\" returns successfully" May 13 23:59:11.936257 kubelet[2603]: I0513 23:59:11.936214 2603 scope.go:117] "RemoveContainer" containerID="c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f" May 13 23:59:11.937331 containerd[1517]: time="2025-05-13T23:59:11.937073587Z" level=info msg="RemoveContainer for \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\"" May 13 23:59:11.940467 containerd[1517]: time="2025-05-13T23:59:11.940433756Z" level=info msg="RemoveContainer for \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\" returns successfully" May 13 23:59:11.940664 kubelet[2603]: I0513 23:59:11.940615 2603 scope.go:117] "RemoveContainer" containerID="1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07" May 13 23:59:11.941551 containerd[1517]: time="2025-05-13T23:59:11.941516647Z" level=info msg="RemoveContainer for \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\"" May 13 23:59:11.945082 containerd[1517]: time="2025-05-13T23:59:11.945035050Z" level=info msg="RemoveContainer for \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\" returns successfully" May 13 23:59:11.945249 kubelet[2603]: I0513 23:59:11.945206 2603 scope.go:117] "RemoveContainer" containerID="143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d" May 13 23:59:11.946609 containerd[1517]: time="2025-05-13T23:59:11.946438413Z" level=info msg="RemoveContainer for \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\"" May 13 23:59:11.951036 containerd[1517]: time="2025-05-13T23:59:11.950887725Z" level=info msg="RemoveContainer for \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\" returns successfully" May 13 23:59:11.951231 kubelet[2603]: I0513 23:59:11.951101 2603 scope.go:117] "RemoveContainer" containerID="839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846" May 13 23:59:11.951324 containerd[1517]: time="2025-05-13T23:59:11.951298745Z" level=error msg="ContainerStatus for \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\": not found" May 13 23:59:11.978596 kubelet[2603]: E0513 23:59:11.978550 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\": not found" containerID="839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846" May 13 23:59:11.978862 kubelet[2603]: I0513 23:59:11.978599 2603 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846"} err="failed to get container status \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\": rpc error: code = NotFound desc = an error occurred when try to find container \"839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846\": not found" May 13 23:59:11.978862 kubelet[2603]: I0513 23:59:11.978686 2603 scope.go:117] "RemoveContainer" containerID="a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5" May 13 23:59:11.979024 containerd[1517]: time="2025-05-13T23:59:11.978965362Z" level=error msg="ContainerStatus for \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\": not found" May 13 23:59:11.979203 kubelet[2603]: E0513 23:59:11.979183 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\": not found" containerID="a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5" May 13 23:59:11.979276 kubelet[2603]: I0513 23:59:11.979204 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5"} err="failed to get container status \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a95b12b4595507cda2fda802a34506f76cb44e6b204c5072be471dc2dbd7a2a5\": not found" May 13 23:59:11.979276 kubelet[2603]: I0513 23:59:11.979218 2603 scope.go:117] "RemoveContainer" containerID="c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f" May 13 23:59:11.979491 kubelet[2603]: E0513 23:59:11.979476 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\": not found" containerID="c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f" May 13 23:59:11.979530 containerd[1517]: time="2025-05-13T23:59:11.979378125Z" level=error msg="ContainerStatus for \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\": not found" May 13 23:59:11.979575 kubelet[2603]: I0513 23:59:11.979493 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f"} err="failed to get container status \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c14b8f6cb1bb2a0ddade18bf1cf3e14258737d01e82598381002da896a2f620f\": not found" May 13 23:59:11.979575 kubelet[2603]: I0513 23:59:11.979508 2603 scope.go:117] "RemoveContainer" containerID="1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07" May 13 23:59:11.979664 containerd[1517]: time="2025-05-13T23:59:11.979625062Z" level=error msg="ContainerStatus for \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\": not found" May 13 23:59:11.979800 kubelet[2603]: E0513 23:59:11.979747 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\": not found" containerID="1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07" May 13 23:59:11.979800 kubelet[2603]: I0513 23:59:11.979777 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07"} err="failed to get container status \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fcdfc618156be7156b021480b8a9febf38b12fed7da020f7d9a728e2b3dda07\": not found" May 13 23:59:11.979800 kubelet[2603]: I0513 23:59:11.979790 2603 scope.go:117] "RemoveContainer" containerID="143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d" May 13 23:59:11.979954 containerd[1517]: time="2025-05-13T23:59:11.979928852Z" level=error msg="ContainerStatus for \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\": not found" May 13 23:59:11.980059 kubelet[2603]: E0513 23:59:11.980039 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\": not found" containerID="143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d" May 13 23:59:11.980099 kubelet[2603]: I0513 23:59:11.980059 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d"} err="failed to get container status \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\": rpc error: code = NotFound desc = an error occurred when try to find container \"143ecf7f91d0831696557bbfb478542a4b72fb79d5965a876f8dba7c2763562d\": not found" May 13 23:59:11.980099 kubelet[2603]: I0513 23:59:11.980073 2603 scope.go:117] "RemoveContainer" containerID="499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5" May 13 23:59:11.981106 containerd[1517]: time="2025-05-13T23:59:11.981080761Z" level=info msg="RemoveContainer for \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\"" May 13 23:59:11.984256 containerd[1517]: time="2025-05-13T23:59:11.984211316Z" level=info msg="RemoveContainer for \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\" returns successfully" May 13 23:59:11.984382 kubelet[2603]: I0513 23:59:11.984351 2603 scope.go:117] "RemoveContainer" containerID="499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5" May 13 23:59:11.984554 containerd[1517]: time="2025-05-13T23:59:11.984523764Z" level=error msg="ContainerStatus for \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\": not found" May 13 23:59:11.984706 
kubelet[2603]: E0513 23:59:11.984669 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\": not found" containerID="499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5" May 13 23:59:11.984706 kubelet[2603]: I0513 23:59:11.984693 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5"} err="failed to get container status \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"499cfd2656c481f6548048ec913dbc02e336257245ceeae345f202c11922b0d5\": not found" May 13 23:59:12.074374 systemd[1]: var-lib-kubelet-pods-ac50abef\x2d6ad4\x2d40fb\x2d9ac8\x2d24f745cd4755-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgfdx4.mount: Deactivated successfully. May 13 23:59:12.074499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a-rootfs.mount: Deactivated successfully. May 13 23:59:12.074583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac3d23c66a181d603016c4430ddaacdda1f2bebcf49271dd6f6e71eaeb71d28a-shm.mount: Deactivated successfully. May 13 23:59:12.074693 systemd[1]: var-lib-kubelet-pods-7896f9a0\x2d04dc\x2d4bdf\x2d9417\x2db5b711ff4829-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5s9k.mount: Deactivated successfully. May 13 23:59:12.074799 systemd[1]: var-lib-kubelet-pods-7896f9a0\x2d04dc\x2d4bdf\x2d9417\x2db5b711ff4829-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 23:59:12.074883 systemd[1]: var-lib-kubelet-pods-7896f9a0\x2d04dc\x2d4bdf\x2d9417\x2db5b711ff4829-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 23:59:12.671792 kubelet[2603]: I0513 23:59:12.671748 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7896f9a0-04dc-4bdf-9417-b5b711ff4829" path="/var/lib/kubelet/pods/7896f9a0-04dc-4bdf-9417-b5b711ff4829/volumes" May 13 23:59:12.672628 kubelet[2603]: I0513 23:59:12.672596 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac50abef-6ad4-40fb-9ac8-24f745cd4755" path="/var/lib/kubelet/pods/ac50abef-6ad4-40fb-9ac8-24f745cd4755/volumes" May 13 23:59:13.019369 sshd[4310]: Connection closed by 10.0.0.1 port 54328 May 13 23:59:13.019942 sshd-session[4307]: pam_unix(sshd:session): session closed for user core May 13 23:59:13.030539 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:54328.service: Deactivated successfully. May 13 23:59:13.032462 systemd[1]: session-26.scope: Deactivated successfully. May 13 23:59:13.033924 systemd-logind[1499]: Session 26 logged out. Waiting for processes to exit. May 13 23:59:13.042134 systemd[1]: Started sshd@26-10.0.0.59:22-10.0.0.1:54332.service - OpenSSH per-connection server daemon (10.0.0.1:54332). May 13 23:59:13.043291 systemd-logind[1499]: Removed session 26. May 13 23:59:13.085004 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 54332 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:59:13.086398 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:13.090607 systemd-logind[1499]: New session 27 of user core. 
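
The RemoveContainer followed by ContainerStatus "not found" pairs above are a benign race, not a failure: after deleting a container, kubelet re-queries its status over CRI, and a NotFound gRPC code simply confirms the delete already completed, so the error is logged and ignored. A sketch of that status-code handling, with the CRI client call stubbed out; only the google.golang.org/grpc status/codes handling is the point here:

// notfound.go - sketch of treating a CRI NotFound as "already removed".
// containerStatus stands in for the real RuntimeService.ContainerStatus call.
package main

import (
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func containerStatus(id string) error {
	// Stub reproducing containerd's NotFound response for a deleted container.
	return status.Errorf(codes.NotFound,
		"an error occurred when try to find container %q: not found", id)
}

func main() {
	id := "839da0d9c065c48cc7dd682f7fddf2cf6dff6a0c100f7907526d0e09d037f846"
	if err := containerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			log.Printf("container %s already gone; treating removal as complete", id)
			return // the delete won the race; nothing left to do
		}
		log.Fatal(err) // any other code is a real failure
	}
}
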
May 13 23:59:13.103846 systemd[1]: Started session-27.scope - Session 27 of User core. May 13 23:59:13.600830 sshd[4477]: Connection closed by 10.0.0.1 port 54332 May 13 23:59:13.601449 sshd-session[4474]: pam_unix(sshd:session): session closed for user core May 13 23:59:13.618051 kubelet[2603]: E0513 23:59:13.616125 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7896f9a0-04dc-4bdf-9417-b5b711ff4829" containerName="mount-cgroup" May 13 23:59:13.618051 kubelet[2603]: E0513 23:59:13.616154 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7896f9a0-04dc-4bdf-9417-b5b711ff4829" containerName="apply-sysctl-overwrites" May 13 23:59:13.618051 kubelet[2603]: E0513 23:59:13.616160 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7896f9a0-04dc-4bdf-9417-b5b711ff4829" containerName="mount-bpf-fs" May 13 23:59:13.618051 kubelet[2603]: E0513 23:59:13.616167 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7896f9a0-04dc-4bdf-9417-b5b711ff4829" containerName="clean-cilium-state" May 13 23:59:13.618051 kubelet[2603]: E0513 23:59:13.616173 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7896f9a0-04dc-4bdf-9417-b5b711ff4829" containerName="cilium-agent" May 13 23:59:13.618051 kubelet[2603]: E0513 23:59:13.616181 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac50abef-6ad4-40fb-9ac8-24f745cd4755" containerName="cilium-operator" May 13 23:59:13.618051 kubelet[2603]: I0513 23:59:13.616203 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="7896f9a0-04dc-4bdf-9417-b5b711ff4829" containerName="cilium-agent" May 13 23:59:13.618051 kubelet[2603]: I0513 23:59:13.616209 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac50abef-6ad4-40fb-9ac8-24f745cd4755" containerName="cilium-operator" May 13 23:59:13.616288 systemd[1]: sshd@26-10.0.0.59:22-10.0.0.1:54332.service: Deactivated successfully. May 13 23:59:13.623439 systemd[1]: session-27.scope: Deactivated successfully. May 13 23:59:13.624922 systemd-logind[1499]: Session 27 logged out. Waiting for processes to exit. May 13 23:59:13.638211 systemd[1]: Started sshd@27-10.0.0.59:22-10.0.0.1:43942.service - OpenSSH per-connection server daemon (10.0.0.1:43942). May 13 23:59:13.641560 systemd-logind[1499]: Removed session 27. May 13 23:59:13.649185 systemd[1]: Created slice kubepods-burstable-pod97c2b7ce_55f1_4620_98bc_eef7a2d40309.slice - libcontainer container kubepods-burstable-pod97c2b7ce_55f1_4620_98bc_eef7a2d40309.slice. May 13 23:59:13.682668 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 43942 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg May 13 23:59:13.684096 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:13.688172 systemd-logind[1499]: New session 28 of user core. May 13 23:59:13.704860 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 13 23:59:13.755824 sshd[4491]: Connection closed by 10.0.0.1 port 43942
May 13 23:59:13.756325 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
May 13 23:59:13.760848 kubelet[2603]: I0513 23:59:13.760818 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-lib-modules\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.761948 kubelet[2603]: I0513 23:59:13.760855 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97c2b7ce-55f1-4620-98bc-eef7a2d40309-clustermesh-secrets\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.761948 kubelet[2603]: I0513 23:59:13.760876 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-etc-cni-netd\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.761948 kubelet[2603]: I0513 23:59:13.760893 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97c2b7ce-55f1-4620-98bc-eef7a2d40309-cilium-config-path\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.761948 kubelet[2603]: I0513 23:59:13.760908 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-host-proc-sys-net\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.761948 kubelet[2603]: I0513 23:59:13.760923 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5twt\" (UniqueName: \"kubernetes.io/projected/97c2b7ce-55f1-4620-98bc-eef7a2d40309-kube-api-access-h5twt\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762073 kubelet[2603]: I0513 23:59:13.760940 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-cni-path\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762073 kubelet[2603]: I0513 23:59:13.760960 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-cilium-cgroup\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762073 kubelet[2603]: I0513 23:59:13.760980 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-host-proc-sys-kernel\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762073 kubelet[2603]: I0513 23:59:13.761000 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/97c2b7ce-55f1-4620-98bc-eef7a2d40309-cilium-ipsec-secrets\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762073 kubelet[2603]: I0513 23:59:13.761015 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-cilium-run\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762073 kubelet[2603]: I0513 23:59:13.761029 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-bpf-maps\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762197 kubelet[2603]: I0513 23:59:13.761042 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-xtables-lock\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762197 kubelet[2603]: I0513 23:59:13.761056 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97c2b7ce-55f1-4620-98bc-eef7a2d40309-hostproc\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.762197 kubelet[2603]: I0513 23:59:13.761070 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97c2b7ce-55f1-4620-98bc-eef7a2d40309-hubble-tls\") pod \"cilium-dph7h\" (UID: \"97c2b7ce-55f1-4620-98bc-eef7a2d40309\") " pod="kube-system/cilium-dph7h"
May 13 23:59:13.765548 systemd[1]: sshd@27-10.0.0.59:22-10.0.0.1:43942.service: Deactivated successfully.
May 13 23:59:13.767499 systemd[1]: session-28.scope: Deactivated successfully.
May 13 23:59:13.768298 systemd-logind[1499]: Session 28 logged out. Waiting for processes to exit.
May 13 23:59:13.780010 systemd[1]: Started sshd@28-10.0.0.59:22-10.0.0.1:43946.service - OpenSSH per-connection server daemon (10.0.0.1:43946).
May 13 23:59:13.780807 systemd-logind[1499]: Removed session 28.
May 13 23:59:13.818574 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 43946 ssh2: RSA SHA256:SDwj61HyZYiqQ53zfGSriPwcQ0Zeintr2ntpmEpbXvg
May 13 23:59:13.820164 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:13.824755 systemd-logind[1499]: New session 29 of user core.
May 13 23:59:13.831852 systemd[1]: Started session-29.scope - Session 29 of User core.
May 13 23:59:13.952987 kubelet[2603]: E0513 23:59:13.952853 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:13.953842 containerd[1517]: time="2025-05-13T23:59:13.953336883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dph7h,Uid:97c2b7ce-55f1-4620-98bc-eef7a2d40309,Namespace:kube-system,Attempt:0,}"
May 13 23:59:13.974343 containerd[1517]: time="2025-05-13T23:59:13.974220922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 23:59:13.974343 containerd[1517]: time="2025-05-13T23:59:13.974304206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 23:59:13.974343 containerd[1517]: time="2025-05-13T23:59:13.974318903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 23:59:13.974520 containerd[1517]: time="2025-05-13T23:59:13.974401506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 23:59:13.999872 systemd[1]: Started cri-containerd-a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500.scope - libcontainer container a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500.
May 13 23:59:14.021147 containerd[1517]: time="2025-05-13T23:59:14.021091686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dph7h,Uid:97c2b7ce-55f1-4620-98bc-eef7a2d40309,Namespace:kube-system,Attempt:0,} returns sandbox id \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\""
May 13 23:59:14.021791 kubelet[2603]: E0513 23:59:14.021753 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:14.023903 containerd[1517]: time="2025-05-13T23:59:14.023841524Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 23:59:14.036836 containerd[1517]: time="2025-05-13T23:59:14.036790537Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8347a54ab85c035b78ab63a0cd6fb0a5e73f3e67e3183b36e1f6a8b12057c015\""
May 13 23:59:14.037490 containerd[1517]: time="2025-05-13T23:59:14.037183454Z" level=info msg="StartContainer for \"8347a54ab85c035b78ab63a0cd6fb0a5e73f3e67e3183b36e1f6a8b12057c015\""
May 13 23:59:14.068864 systemd[1]: Started cri-containerd-8347a54ab85c035b78ab63a0cd6fb0a5e73f3e67e3183b36e1f6a8b12057c015.scope - libcontainer container 8347a54ab85c035b78ab63a0cd6fb0a5e73f3e67e3183b36e1f6a8b12057c015.
May 13 23:59:14.093863 containerd[1517]: time="2025-05-13T23:59:14.093816795Z" level=info msg="StartContainer for \"8347a54ab85c035b78ab63a0cd6fb0a5e73f3e67e3183b36e1f6a8b12057c015\" returns successfully"
May 13 23:59:14.104444 systemd[1]: cri-containerd-8347a54ab85c035b78ab63a0cd6fb0a5e73f3e67e3183b36e1f6a8b12057c015.scope: Deactivated successfully.
May 13 23:59:14.139027 containerd[1517]: time="2025-05-13T23:59:14.138955125Z" level=info msg="shim disconnected" id=8347a54ab85c035b78ab63a0cd6fb0a5e73f3e67e3183b36e1f6a8b12057c015 namespace=k8s.io
May 13 23:59:14.139027 containerd[1517]: time="2025-05-13T23:59:14.139017520Z" level=warning msg="cleaning up after shim disconnected" id=8347a54ab85c035b78ab63a0cd6fb0a5e73f3e67e3183b36e1f6a8b12057c015 namespace=k8s.io
May 13 23:59:14.139027 containerd[1517]: time="2025-05-13T23:59:14.139026206Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:59:14.743100 kubelet[2603]: E0513 23:59:14.743060 2603 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 23:59:14.929476 kubelet[2603]: E0513 23:59:14.929413 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:14.932032 containerd[1517]: time="2025-05-13T23:59:14.931957710Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:59:14.947865 containerd[1517]: time="2025-05-13T23:59:14.947810987Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb\""
May 13 23:59:14.948344 containerd[1517]: time="2025-05-13T23:59:14.948309600Z" level=info msg="StartContainer for \"6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb\""
May 13 23:59:14.985882 systemd[1]: Started cri-containerd-6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb.scope - libcontainer container 6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb.
May 13 23:59:15.011272 containerd[1517]: time="2025-05-13T23:59:15.011135578Z" level=info msg="StartContainer for \"6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb\" returns successfully"
May 13 23:59:15.018567 systemd[1]: cri-containerd-6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb.scope: Deactivated successfully.
May 13 23:59:15.055884 containerd[1517]: time="2025-05-13T23:59:15.055803512Z" level=info msg="shim disconnected" id=6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb namespace=k8s.io
May 13 23:59:15.055884 containerd[1517]: time="2025-05-13T23:59:15.055866900Z" level=warning msg="cleaning up after shim disconnected" id=6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb namespace=k8s.io
May 13 23:59:15.055884 containerd[1517]: time="2025-05-13T23:59:15.055878341Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:59:15.866834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6396524c42382ce0b919a9b4a281d69808927d090c636cf2a8f24d10ffc314fb-rootfs.mount: Deactivated successfully.
May 13 23:59:15.932864 kubelet[2603]: E0513 23:59:15.932838 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:15.934454 containerd[1517]: time="2025-05-13T23:59:15.934390667Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:59:15.951807 containerd[1517]: time="2025-05-13T23:59:15.951762097Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4\""
May 13 23:59:15.952568 containerd[1517]: time="2025-05-13T23:59:15.952541259Z" level=info msg="StartContainer for \"7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4\""
May 13 23:59:15.982879 systemd[1]: Started cri-containerd-7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4.scope - libcontainer container 7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4.
May 13 23:59:16.012772 containerd[1517]: time="2025-05-13T23:59:16.012648806Z" level=info msg="StartContainer for \"7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4\" returns successfully"
May 13 23:59:16.014467 systemd[1]: cri-containerd-7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4.scope: Deactivated successfully.
May 13 23:59:16.041300 containerd[1517]: time="2025-05-13T23:59:16.041237724Z" level=info msg="shim disconnected" id=7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4 namespace=k8s.io
May 13 23:59:16.041590 containerd[1517]: time="2025-05-13T23:59:16.041557747Z" level=warning msg="cleaning up after shim disconnected" id=7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4 namespace=k8s.io
May 13 23:59:16.041590 containerd[1517]: time="2025-05-13T23:59:16.041573827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:59:16.739373 kubelet[2603]: I0513 23:59:16.739294 2603 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T23:59:16Z","lastTransitionTime":"2025-05-13T23:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 23:59:16.866902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e577727dcdb077e591b34b4a762c39aeb4b7ef41cd98dab8b4ded0398a117c4-rootfs.mount: Deactivated successfully.
May 13 23:59:16.936169 kubelet[2603]: E0513 23:59:16.936107 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:16.939007 containerd[1517]: time="2025-05-13T23:59:16.938962653Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:59:16.963856 containerd[1517]: time="2025-05-13T23:59:16.963799262Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195\""
May 13 23:59:16.964454 containerd[1517]: time="2025-05-13T23:59:16.964373445Z" level=info msg="StartContainer for \"e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195\""
May 13 23:59:16.994862 systemd[1]: Started cri-containerd-e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195.scope - libcontainer container e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195.
May 13 23:59:17.019633 systemd[1]: cri-containerd-e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195.scope: Deactivated successfully.
May 13 23:59:17.021926 containerd[1517]: time="2025-05-13T23:59:17.021881460Z" level=info msg="StartContainer for \"e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195\" returns successfully"
May 13 23:59:17.047451 containerd[1517]: time="2025-05-13T23:59:17.047386556Z" level=info msg="shim disconnected" id=e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195 namespace=k8s.io
May 13 23:59:17.047451 containerd[1517]: time="2025-05-13T23:59:17.047443161Z" level=warning msg="cleaning up after shim disconnected" id=e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195 namespace=k8s.io
May 13 23:59:17.047451 containerd[1517]: time="2025-05-13T23:59:17.047452368Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:59:17.867086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1d35ab0625f341ad527594c17938d8f3d2b4c7a9740af122d4ea6b995791195-rootfs.mount: Deactivated successfully.
May 13 23:59:17.940504 kubelet[2603]: E0513 23:59:17.940476 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:17.942236 containerd[1517]: time="2025-05-13T23:59:17.942084740Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:59:17.958479 containerd[1517]: time="2025-05-13T23:59:17.958426478Z" level=info msg="CreateContainer within sandbox \"a008c4816dc7ca8d9779d3cadcf57d40bf2de50cae0ccc90b182da02cb845500\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a856b36069fcc5aa7e2426d8cdda062c93eddda8fdd09c954b47d6ae3533079c\""
May 13 23:59:17.959000 containerd[1517]: time="2025-05-13T23:59:17.958941633Z" level=info msg="StartContainer for \"a856b36069fcc5aa7e2426d8cdda062c93eddda8fdd09c954b47d6ae3533079c\""
May 13 23:59:17.991872 systemd[1]: Started cri-containerd-a856b36069fcc5aa7e2426d8cdda062c93eddda8fdd09c954b47d6ae3533079c.scope - libcontainer container a856b36069fcc5aa7e2426d8cdda062c93eddda8fdd09c954b47d6ae3533079c.
May 13 23:59:18.021708 containerd[1517]: time="2025-05-13T23:59:18.021669039Z" level=info msg="StartContainer for \"a856b36069fcc5aa7e2426d8cdda062c93eddda8fdd09c954b47d6ae3533079c\" returns successfully"
May 13 23:59:18.436750 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 23:59:18.944905 kubelet[2603]: E0513 23:59:18.944867 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:19.954531 kubelet[2603]: E0513 23:59:19.954267 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:20.671958 kubelet[2603]: E0513 23:59:20.671833 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:21.620709 systemd-networkd[1450]: lxc_health: Link UP
May 13 23:59:21.631842 systemd-networkd[1450]: lxc_health: Gained carrier
May 13 23:59:21.956000 kubelet[2603]: E0513 23:59:21.954591 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:21.972246 kubelet[2603]: I0513 23:59:21.971545 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dph7h" podStartSLOduration=8.971527392 podStartE2EDuration="8.971527392s" podCreationTimestamp="2025-05-13 23:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:18.959177254 +0000 UTC m=+94.372339623" watchObservedRunningTime="2025-05-13 23:59:21.971527392 +0000 UTC m=+97.384689741"
May 13 23:59:22.669580 kubelet[2603]: E0513 23:59:22.669541 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:22.952864 kubelet[2603]: E0513 23:59:22.952649 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:23.307960 systemd-networkd[1450]: lxc_health: Gained IPv6LL
May 13 23:59:23.953990 kubelet[2603]: E0513 23:59:23.953944 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:59:24.319177 systemd[1]: run-containerd-runc-k8s.io-a856b36069fcc5aa7e2426d8cdda062c93eddda8fdd09c954b47d6ae3533079c-runc.FUp75W.mount: Deactivated successfully.
May 13 23:59:28.515300 systemd[1]: run-containerd-runc-k8s.io-a856b36069fcc5aa7e2426d8cdda062c93eddda8fdd09c954b47d6ae3533079c-runc.JbQUdx.mount: Deactivated successfully.
May 13 23:59:28.573568 sshd[4501]: Connection closed by 10.0.0.1 port 43946
May 13 23:59:28.574459 sshd-session[4497]: pam_unix(sshd:session): session closed for user core
May 13 23:59:28.579261 systemd[1]: sshd@28-10.0.0.59:22-10.0.0.1:43946.service: Deactivated successfully.
May 13 23:59:28.581361 systemd[1]: session-29.scope: Deactivated successfully.
May 13 23:59:28.582123 systemd-logind[1499]: Session 29 logged out. Waiting for processes to exit.
May 13 23:59:28.583045 systemd-logind[1499]: Removed session 29.