Feb 13 19:41:17.899364 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:41:17.899390 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:41:17.899405 kernel: BIOS-provided physical RAM map:
Feb 13 19:41:17.899413 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:41:17.899422 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:41:17.899430 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:41:17.899440 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:41:17.899449 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:41:17.899458 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:41:17.899469 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:41:17.899478 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:41:17.899486 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:41:17.899495 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:41:17.899504 kernel: NX (Execute Disable) protection: active
Feb 13 19:41:17.899515 kernel: APIC: Static calls initialized
Feb 13 19:41:17.899527 kernel: SMBIOS 2.8 present.
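The BIOS-e820 entries above are the firmware-provided physical RAM map; only regions marked `usable` become allocatable memory. A minimal sketch of how one might total those regions from a captured console log (the `boot.log` filename is an assumption; range ends are inclusive, hence the +1):

```python
import re

# Match "BIOS-e820: [mem 0xA-0xB] usable/reserved" lines like the ones above
# and total the usable RAM.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg_text: str) -> int:
    total = 0
    for start, end, kind in E820_RE.findall(dmesg_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # inclusive range
    return total

with open("boot.log") as f:  # hypothetical capture of this console transcript
    print(usable_bytes(f.read()) / 2**20, "MiB usable")
```

For this map the two usable ranges sum to roughly 2.5 GiB, consistent with the later `Memory: 2432544K/2571752K available` line.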
Feb 13 19:41:17.899537 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:41:17.899546 kernel: Hypervisor detected: KVM
Feb 13 19:41:17.899555 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:41:17.899564 kernel: kvm-clock: using sched offset of 2294014898 cycles
Feb 13 19:41:17.899575 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:41:17.899584 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 19:41:17.899594 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:41:17.899605 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:41:17.899616 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:41:17.899632 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:41:17.899641 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:41:17.899651 kernel: Using GB pages for direct mapping
Feb 13 19:41:17.899661 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:41:17.899670 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:41:17.899680 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:17.899690 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:17.899700 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:17.899712 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:41:17.899722 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:17.899731 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:17.899741 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:17.899751 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:17.899760 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:41:17.899770 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:41:17.899784 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:41:17.899797 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:41:17.899807 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:41:17.899817 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:41:17.899827 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:41:17.899837 kernel: No NUMA configuration found
Feb 13 19:41:17.899847 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:41:17.899860 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:41:17.899870 kernel: Zone ranges:
Feb 13 19:41:17.899880 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:41:17.899890 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:41:17.899900 kernel: Normal empty
Feb 13 19:41:17.899910 kernel: Movable zone start for each node
Feb 13 19:41:17.899920 kernel: Early memory node ranges
Feb 13 19:41:17.899930 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:41:17.899940 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:41:17.899950 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:41:17.899963 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:41:17.899973 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:41:17.899983 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:41:17.899993 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:41:17.900003 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:41:17.900013 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:41:17.900023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:41:17.900033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:41:17.900043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:41:17.900056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:41:17.900066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:41:17.900086 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:41:17.900096 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:41:17.900105 kernel: TSC deadline timer available
Feb 13 19:41:17.900115 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:41:17.900138 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:41:17.900149 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:41:17.900158 kernel: kvm-guest: setup PV sched yield
Feb 13 19:41:17.900172 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:41:17.900182 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:41:17.900192 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:41:17.900202 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:41:17.900212 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:41:17.900222 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:41:17.900231 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:41:17.900241 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:41:17.900251 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:41:17.900266 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:41:17.900276 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:41:17.900286 kernel: random: crng init done
Feb 13 19:41:17.900295 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:41:17.900305 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:41:17.900315 kernel: Fallback order for Node 0: 0
Feb 13 19:41:17.900325 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
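The `Kernel command line:` entry above shows the initrd tooling prepending `rootflags=rw mount.usrflags=ro` to the bootloader-provided line, so some parameters appear twice. A sketch of splitting such a line into key/value pairs, keeping the last occurrence of each key (assumed here to be the effective one for repeated options):

```python
import shlex

def parse_cmdline(cmdline: str) -> dict:
    # Split on whitespace (quoted values are rare but legal); flag-style
    # parameters without "=" are stored as True.
    params = {}
    for tok in shlex.split(cmdline):
        key, sep, value = tok.partition("=")
        params[key] = value if sep else True  # last occurrence wins
    return params

with open("/proc/cmdline") as f:
    params = parse_cmdline(f.read())
print(params.get("root"), params.get("verity.usrhash"))
```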
Feb 13 19:41:17.900335 kernel: Policy zone: DMA32
Feb 13 19:41:17.900345 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:41:17.900360 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 138948K reserved, 0K cma-reserved)
Feb 13 19:41:17.900369 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:41:17.900379 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:41:17.900390 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:41:17.900400 kernel: Dynamic Preempt: voluntary
Feb 13 19:41:17.900410 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:41:17.900421 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:41:17.900432 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:41:17.900442 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:41:17.900456 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:41:17.900466 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:41:17.900476 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:41:17.900487 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:41:17.900497 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:41:17.900507 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:41:17.900518 kernel: Console: colour VGA+ 80x25
Feb 13 19:41:17.900528 kernel: printk: console [ttyS0] enabled
Feb 13 19:41:17.900538 kernel: ACPI: Core revision 20230628
Feb 13 19:41:17.900552 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:41:17.900563 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:41:17.900573 kernel: x2apic enabled
Feb 13 19:41:17.900583 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:41:17.900594 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:41:17.900604 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:41:17.900615 kernel: kvm-guest: setup PV IPIs
Feb 13 19:41:17.900638 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:41:17.900648 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:41:17.900659 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 19:41:17.900670 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:41:17.900680 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:41:17.900694 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:41:17.900706 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:41:17.900716 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:41:17.900727 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:41:17.900741 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:41:17.900752 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:41:17.900763 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:41:17.900774 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:41:17.900785 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:41:17.900795 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:41:17.900807 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:41:17.900818 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:41:17.900829 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:41:17.900842 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:41:17.900853 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:41:17.900864 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:41:17.900874 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:41:17.900885 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:41:17.900896 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:41:17.900906 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:41:17.900917 kernel: landlock: Up and running.
Feb 13 19:41:17.900927 kernel: SELinux: Initializing.
Feb 13 19:41:17.900941 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:41:17.900952 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:41:17.900963 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:41:17.900974 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:41:17.900985 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:41:17.900995 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:41:17.901006 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:41:17.901016 kernel: ... version: 0
Feb 13 19:41:17.901030 kernel: ... bit width: 48
Feb 13 19:41:17.901041 kernel: ... generic registers: 6
Feb 13 19:41:17.901052 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:41:17.901063 kernel: ... max period: 00007fffffffffff
Feb 13 19:41:17.901083 kernel: ... fixed-purpose events: 0
Feb 13 19:41:17.901093 kernel: ... event mask: 000000000000003f
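The Spectre/RETBleed/SRSO lines above are the kernel's speculative-execution mitigation report for this EPYC vCPU; the same status is exported at runtime under sysfs. A small sketch that prints it:

```python
from pathlib import Path

# Print the same mitigation status the kernel logged above, read from sysfs.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```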
Feb 13 19:41:17.901104 kernel: signal: max sigframe size: 1776
Feb 13 19:41:17.901115 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:41:17.901138 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:41:17.901149 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:41:17.901164 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:41:17.901174 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:41:17.901185 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:41:17.901196 kernel: smpboot: Max logical packages: 1
Feb 13 19:41:17.901206 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 19:41:17.901217 kernel: devtmpfs: initialized
Feb 13 19:41:17.901227 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:41:17.901238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:41:17.901249 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:41:17.901263 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:41:17.901273 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:41:17.901284 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:41:17.901295 kernel: audit: type=2000 audit(1739475677.799:1): state=initialized audit_enabled=0 res=1
Feb 13 19:41:17.901305 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:41:17.901316 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:41:17.901327 kernel: cpuidle: using governor menu
Feb 13 19:41:17.901337 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:41:17.901348 kernel: dca service started, version 1.12.1
Feb 13 19:41:17.901362 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:41:17.901373 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:41:17.901383 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:41:17.901394 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
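The audit record above, `audit(1739475677.799:1)`, stamps its first field with the Unix epoch; converting it confirms the wall-clock time the journal timestamps show:

```python
from datetime import datetime, timezone

# First field of "audit(1739475677.799:1)" is a Unix epoch in seconds.
ts = datetime.fromtimestamp(1739475677.799, tz=timezone.utc)
print(ts.isoformat())  # 2025-02-13T19:41:17.799000+00:00
```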
Feb 13 19:41:17.901405 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:41:17.901416 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:41:17.901427 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:41:17.901437 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:41:17.901448 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:41:17.901461 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:41:17.901472 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:41:17.901483 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:41:17.901493 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:41:17.901504 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:41:17.901515 kernel: ACPI: Interpreter enabled
Feb 13 19:41:17.901525 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:41:17.901536 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:41:17.901546 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:41:17.901560 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:41:17.901571 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:41:17.901582 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:41:17.901801 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:41:17.901963 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:41:17.902148 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:41:17.902164 kernel: PCI host bridge to bus 0000:00
Feb 13 19:41:17.902333 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:41:17.902479 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:41:17.902624 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:41:17.902767 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:41:17.902906 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:41:17.903047 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:41:17.903234 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:41:17.903426 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:41:17.903595 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:41:17.903751 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:41:17.903905 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:41:17.904058 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:41:17.904243 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:41:17.904415 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:41:17.904571 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:41:17.904725 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:41:17.904880 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:41:17.905051 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:41:17.905237 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:41:17.905394 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:41:17.905554 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:41:17.905726 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:41:17.905882 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:41:17.906037 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:41:17.906215 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:41:17.906338 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:41:17.906466 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:41:17.906592 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:41:17.906724 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:41:17.906844 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:41:17.906962 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:41:17.907102 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:41:17.907240 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:41:17.907252 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:41:17.907264 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:41:17.907272 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:41:17.907280 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:41:17.907288 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:41:17.907296 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:41:17.907304 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:41:17.907312 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:41:17.907320 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:41:17.907330 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:41:17.907338 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:41:17.907346 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:41:17.907354 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:41:17.907362 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:41:17.907370 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:41:17.907377 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:41:17.907385 kernel: iommu: Default domain type: Translated
Feb 13 19:41:17.907393 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:41:17.907401 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:41:17.907411 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:41:17.907419 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:41:17.907427 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:41:17.907549 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:41:17.907676 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:41:17.907795 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:41:17.907805 kernel: vgaarb: loaded
Feb 13 19:41:17.907813 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:41:17.907825 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:41:17.907833 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:41:17.907840 kernel: VFS: Disk quotas dquot_6.6.0
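The `pci 0000:00:...` lines above enumerate the Q35 topology; the `1af4:xxxx` IDs mark the paravirtual virtio devices (for example, 1af4:1001 is the virtio-blk disk that later appears as /dev/vda). The same vendor/device/class triples can be read back from sysfs; a sketch:

```python
from pathlib import Path

# List PCI functions the way the kernel enumerated them above.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()    # e.g. 0x1af4
    device = (dev / "device").read_text().strip()    # e.g. 0x1001
    pci_class = (dev / "class").read_text().strip()  # e.g. 0x010000
    print(dev.name, vendor, device, pci_class)
```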
Feb 13 19:41:17.907848 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:41:17.907856 kernel: pnp: PnP ACPI init
Feb 13 19:41:17.907983 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:41:17.907994 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:41:17.908002 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:41:17.908013 kernel: NET: Registered PF_INET protocol family
Feb 13 19:41:17.908021 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:41:17.908029 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:41:17.908037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:41:17.908045 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:41:17.908053 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:41:17.908061 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:41:17.908077 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:41:17.908085 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:41:17.908096 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:41:17.908104 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:41:17.908235 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:41:17.908346 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:41:17.908456 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:41:17.908566 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:41:17.908675 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:41:17.908785 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:41:17.908799 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:41:17.908807 kernel: Initialise system trusted keyrings
Feb 13 19:41:17.908815 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:41:17.908823 kernel: Key type asymmetric registered
Feb 13 19:41:17.908831 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:41:17.908839 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:41:17.908847 kernel: io scheduler mq-deadline registered
Feb 13 19:41:17.908855 kernel: io scheduler kyber registered
Feb 13 19:41:17.908862 kernel: io scheduler bfq registered
Feb 13 19:41:17.908873 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:41:17.908881 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:41:17.908889 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:41:17.908897 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:41:17.908905 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:41:17.908913 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:41:17.908921 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:41:17.908929 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:41:17.908936 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:41:17.909076 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:41:17.909088 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 13 19:41:17.909278 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:41:17.909391 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:41:17 UTC (1739475677)
Feb 13 19:41:17.909501 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:41:17.909511 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:41:17.909520 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:41:17.909527 kernel: Segment Routing with IPv6
Feb 13 19:41:17.909540 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:41:17.909548 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:41:17.909556 kernel: Key type dns_resolver registered
Feb 13 19:41:17.909564 kernel: IPI shorthand broadcast: enabled
Feb 13 19:41:17.909572 kernel: sched_clock: Marking stable (565002381, 108173115)->(724723916, -51548420)
Feb 13 19:41:17.909579 kernel: registered taskstats version 1
Feb 13 19:41:17.909587 kernel: Loading compiled-in X.509 certificates
Feb 13 19:41:17.909595 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:41:17.909603 kernel: Key type .fscrypt registered
Feb 13 19:41:17.909614 kernel: Key type fscrypt-provisioning registered
Feb 13 19:41:17.909622 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:41:17.909630 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:41:17.909637 kernel: ima: No architecture policies found
Feb 13 19:41:17.909645 kernel: clk: Disabling unused clocks
Feb 13 19:41:17.909653 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 19:41:17.909661 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:41:17.909669 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 19:41:17.909686 kernel: Run /init as init process
Feb 13 19:41:17.909705 kernel: with arguments:
Feb 13 19:41:17.909720 kernel: /init
Feb 13 19:41:17.909728 kernel: with environment:
Feb 13 19:41:17.909736 kernel: HOME=/
Feb 13 19:41:17.909744 kernel: TERM=linux
Feb 13 19:41:17.909751 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:41:17.909761 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:41:17.909772 systemd[1]: Detected virtualization kvm.
Feb 13 19:41:17.909783 systemd[1]: Detected architecture x86-64.
Feb 13 19:41:17.909791 systemd[1]: Running in initrd.
Feb 13 19:41:17.909800 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:41:17.909808 systemd[1]: Hostname set to .
Feb 13 19:41:17.909816 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:41:17.909825 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:41:17.909833 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:41:17.909842 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:41:17.909854 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:41:17.909874 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
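The `dev-disk-by\x2dlabel-...` unit names above are systemd's escaped form of device paths. A minimal sketch of the escaping rule (real systemd-escape handles a few more corner cases, such as a leading dot):

```python
def systemd_escape_path(path: str) -> str:
    # Strip the leading "/", replace "/" with "-", and hex-escape other
    # special bytes; this is why "/dev/disk/by-label/EFI-SYSTEM" appears
    # above as "dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device".
    out = []
    for i, ch in enumerate(path.strip("/")):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```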
Feb 13 19:41:17.909885 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:41:17.909894 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:41:17.909904 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:41:17.909916 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:41:17.909924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:41:17.909933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:41:17.909942 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:41:17.909950 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:41:17.909959 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:41:17.909967 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:41:17.909976 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:41:17.909989 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:41:17.909998 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:41:17.910006 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:41:17.910015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:41:17.910024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:41:17.910033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:41:17.910041 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:41:17.910050 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:41:17.910061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:41:17.910078 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:41:17.910087 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:41:17.910096 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:41:17.910104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:41:17.910113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:41:17.910121 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:41:17.910142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:41:17.910151 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:41:17.910164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:41:17.910192 systemd-journald[195]: Collecting audit messages is disabled.
Feb 13 19:41:17.910215 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:41:17.910224 systemd-journald[195]: Journal started
Feb 13 19:41:17.910245 systemd-journald[195]: Runtime Journal (/run/log/journal/74e82e1c4b9e4a078f9e509656f39483) is 6.0M, max 48.3M, 42.3M free.
Feb 13 19:41:17.895903 systemd-modules-load[196]: Inserted module 'overlay'
Feb 13 19:41:17.938478 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:41:17.938498 kernel: Bridge firewalling registered
Feb 13 19:41:17.938509 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:41:17.922535 systemd-modules-load[196]: Inserted module 'br_netfilter'
Feb 13 19:41:17.938687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:41:17.940711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:41:17.947281 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:41:17.947958 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:41:17.950378 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:41:17.953275 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:41:17.961955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:41:17.962600 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:41:17.965416 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:41:17.967448 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:41:17.978111 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:41:17.980226 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:41:17.995255 dracut-cmdline[230]: dracut-dracut-053
Feb 13 19:41:17.997872 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:41:18.002782 systemd-resolved[224]: Positive Trust Anchors:
Feb 13 19:41:18.002790 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:41:18.002826 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:41:18.005218 systemd-resolved[224]: Defaulting to hostname 'linux'.
Feb 13 19:41:18.006252 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:41:18.013725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:41:18.070153 kernel: SCSI subsystem initialized
Feb 13 19:41:18.079144 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:41:18.089148 kernel: iscsi: registered transport (tcp)
Feb 13 19:41:18.110152 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:41:18.110175 kernel: QLogic iSCSI HBA Driver
Feb 13 19:41:18.150721 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:41:18.158280 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:41:18.183702 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:41:18.183771 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:41:18.183783 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:41:18.223151 kernel: raid6: avx2x4 gen() 30779 MB/s
Feb 13 19:41:18.240146 kernel: raid6: avx2x2 gen() 31521 MB/s
Feb 13 19:41:18.257224 kernel: raid6: avx2x1 gen() 25775 MB/s
Feb 13 19:41:18.257243 kernel: raid6: using algorithm avx2x2 gen() 31521 MB/s
Feb 13 19:41:18.275222 kernel: raid6: .... xor() 19873 MB/s, rmw enabled
Feb 13 19:41:18.275257 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:41:18.295163 kernel: xor: automatically using best checksumming function avx
Feb 13 19:41:18.441157 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:41:18.455162 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:41:18.472339 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:41:18.483816 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Feb 13 19:41:18.488459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:41:18.490070 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:41:18.509392 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Feb 13 19:41:18.545231 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:41:18.554334 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:41:18.615428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:41:18.630378 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:41:18.645884 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:41:18.663924 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:41:18.664099 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:41:18.664112 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:41:18.664123 kernel: GPT:9289727 != 19775487
Feb 13 19:41:18.664154 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:41:18.664165 kernel: GPT:9289727 != 19775487
Feb 13 19:41:18.664175 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:41:18.664186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:41:18.643812 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:41:18.647258 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:41:18.649259 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:41:18.650773 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:41:18.662488 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:41:18.683476 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:41:18.689093 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:41:18.689120 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:41:18.699166 kernel: libata version 3.00 loaded.
Feb 13 19:41:18.699638 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
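The GPT complaints above (`GPT:9289727 != 19775487`) are the usual sign of a small disk image written to a larger disk: the backup GPT header must live in the disk's last LBA, but it is still where the image's last LBA was. The arithmetic, using the virtio-blk geometry logged above:

```python
# Backup GPT header location check, from the values in this log.
disk_sectors  = 19775488          # virtio_blk: [vda] 19775488 512-byte logical blocks
expected_lba  = disk_sectors - 1  # last LBA of the real disk -> 19775487
found_lba     = 9289727           # where the image's backup header actually sits
image_sectors = found_lba + 1     # the disk size the image was built for

print(f"backup header expected at LBA {expected_lba}, found at {found_lba}")
print(f"image was sized for {image_sectors * 512 / 2**30:.2f} GiB")  # ~4.43 GiB
```

The disk-uuid messages below (`Primary Header is updated... Secondary Header is updated`) show the initrd rewriting the GPT to fit the real disk.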
Feb 13 19:41:18.699794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:41:18.704276 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:41:18.707699 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460)
Feb 13 19:41:18.710362 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (473)
Feb 13 19:41:18.710035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:41:18.710280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:41:18.711569 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:41:18.722438 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:41:18.739769 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:41:18.739786 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:41:18.739955 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:41:18.740122 kernel: scsi host0: ahci
Feb 13 19:41:18.740303 kernel: scsi host1: ahci
Feb 13 19:41:18.740443 kernel: scsi host2: ahci
Feb 13 19:41:18.740770 kernel: scsi host3: ahci
Feb 13 19:41:18.740961 kernel: scsi host4: ahci
Feb 13 19:41:18.741210 kernel: scsi host5: ahci
Feb 13 19:41:18.741384 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 19:41:18.741399 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 19:41:18.741411 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 19:41:18.741422 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 19:41:18.741432 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 19:41:18.741442 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 19:41:18.724415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:41:18.745177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:41:18.781374 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:41:18.784047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:41:18.790331 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:41:18.790498 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:41:18.799688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:41:18.816381 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:41:18.820364 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:41:18.829183 disk-uuid[564]: Primary Header is updated.
Feb 13 19:41:18.829183 disk-uuid[564]: Secondary Entries is updated.
Feb 13 19:41:18.829183 disk-uuid[564]: Secondary Header is updated.
Feb 13 19:41:18.832164 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:41:18.838167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:41:18.865715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:41:19.050172 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:41:19.050256 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:41:19.050287 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:41:19.051161 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:41:19.052162 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:41:19.053162 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:41:19.054156 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:41:19.054174 kernel: ata3.00: applying bridge limits
Feb 13 19:41:19.055361 kernel: ata3.00: configured for UDMA/100
Feb 13 19:41:19.056159 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:41:19.096171 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:41:19.109819 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:41:19.109832 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:41:19.838189 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:41:19.838415 disk-uuid[565]: The operation has completed successfully.
Feb 13 19:41:19.866113 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:41:19.866277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:41:19.894258 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:41:19.898210 sh[590]: Success
Feb 13 19:41:19.910150 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:41:19.942871 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:41:19.951517 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:41:19.954061 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:41:19.968186 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9
Feb 13 19:41:19.968218 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:41:19.968229 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:41:19.970467 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:41:19.970493 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:41:19.974154 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:41:19.974829 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:41:19.982253 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:41:19.984519 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:41:19.992620 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:41:19.992654 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:41:19.992668 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:41:19.995163 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:41:20.003018 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:41:20.004729 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:41:20.052386 systemd[1]: Finished ignition-setup.service - Ignition (setup).
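verity-setup.service above maps /dev/mapper/usr using the root hash passed as `verity.usrhash=` on the kernel command line; dm-verity commits to the whole read-only /usr image via a Merkle tree of block digests. A toy, single-level illustration of the shape of that computation (real dm-verity adds a salt, a superblock, and as many tree levels as needed):

```python
import hashlib

BLOCK = 4096  # dm-verity's usual data/hash block size

def verity_root(image: bytes) -> str:
    # Hash each 4 KiB block (zero-padded), then hash the concatenated
    # digests; the final digest plays the role of the root hash.
    leaves = b"".join(
        hashlib.sha256(image[i:i + BLOCK].ljust(BLOCK, b"\0")).digest()
        for i in range(0, len(image), BLOCK)
    )
    return hashlib.sha256(leaves).hexdigest()

print(verity_root(b"example /usr image contents"))
```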
Feb 13 19:41:20.063250 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:41:20.090572 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:41:20.101502 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:41:20.116849 ignition[734]: Ignition 2.20.0
Feb 13 19:41:20.116862 ignition[734]: Stage: fetch-offline
Feb 13 19:41:20.116898 ignition[734]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:41:20.116908 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:41:20.116994 ignition[734]: parsed url from cmdline: ""
Feb 13 19:41:20.116998 ignition[734]: no config URL provided
Feb 13 19:41:20.117003 ignition[734]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:41:20.117012 ignition[734]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:41:20.117052 ignition[734]: op(1): [started] loading QEMU firmware config module
Feb 13 19:41:20.117057 ignition[734]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:41:20.123563 ignition[734]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:41:20.134620 systemd-networkd[771]: lo: Link UP
Feb 13 19:41:20.134629 systemd-networkd[771]: lo: Gained carrier
Feb 13 19:41:20.137837 systemd-networkd[771]: Enumeration completed
Feb 13 19:41:20.138758 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:41:20.140253 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:41:20.140257 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:41:20.141360 systemd-networkd[771]: eth0: Link UP
Feb 13 19:41:20.141365 systemd-networkd[771]: eth0: Gained carrier
Feb 13 19:41:20.141373 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:41:20.144646 systemd[1]: Reached target network.target - Network.
Feb 13 19:41:20.151174 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:41:20.174386 ignition[734]: parsing config with SHA512: f726919ec09422cb743ed6ec45b1eac267e62ab178c0eee21dd2efb4eb4b7119d8192b3d70358643067d4104e08580b428307466770f654c469b5604f36c7b9f
Feb 13 19:41:20.179304 unknown[734]: fetched base config from "system"
Feb 13 19:41:20.179316 unknown[734]: fetched user config from "qemu"
Feb 13 19:41:20.180052 ignition[734]: fetch-offline: fetch-offline passed
Feb 13 19:41:20.180230 ignition[734]: Ignition finished successfully
Feb 13 19:41:20.182720 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:41:20.185508 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:41:20.204312 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
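On the QEMU platform, Ignition's fetch-offline stage pulls the user config over fw_cfg, which is why it runs `modprobe qemu_fw_cfg` above before logging the config's SHA512. With the module loaded, the blob should be readable from sysfs; the entry name below is the conventional one, so treat the exact path as an assumption:

```python
import hashlib

# Assumed sysfs path for the fw_cfg entry Ignition reads on QEMU guests.
path = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"
with open(path, "rb") as f:
    config = f.read()

# The journal logs "parsing config with SHA512: ..."; the same digest can
# be recomputed from the raw blob for comparison.
print(hashlib.sha512(config).hexdigest())
```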
Feb 13 19:41:20.216597 ignition[783]: Ignition 2.20.0
Feb 13 19:41:20.216608 ignition[783]: Stage: kargs
Feb 13 19:41:20.216758 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:41:20.216769 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:41:20.217565 ignition[783]: kargs: kargs passed
Feb 13 19:41:20.217607 ignition[783]: Ignition finished successfully
Feb 13 19:41:20.220802 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:41:20.230291 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:41:20.241216 ignition[791]: Ignition 2.20.0
Feb 13 19:41:20.241231 ignition[791]: Stage: disks
Feb 13 19:41:20.241409 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:41:20.241422 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:41:20.245555 ignition[791]: disks: disks passed
Feb 13 19:41:20.245612 ignition[791]: Ignition finished successfully
Feb 13 19:41:20.248876 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:41:20.249207 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:41:20.250869 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:41:20.252971 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:41:20.255282 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:41:20.257155 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:41:20.273265 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:41:20.287008 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:41:20.293218 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:41:20.306228 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:41:20.387219 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none.
Feb 13 19:41:20.387344 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:41:20.389436 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:41:20.400198 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:41:20.402608 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:41:20.404987 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:41:20.405040 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:41:20.414821 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (810)
Feb 13 19:41:20.414841 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:41:20.414853 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:41:20.414863 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:41:20.414874 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:41:20.406847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:41:20.417246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:41:20.419059 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:41:20.433310 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:41:20.464138 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:41:20.468438 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:41:20.472609 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:41:20.475898 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:41:20.551392 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:41:20.565210 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:41:20.567938 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:41:20.573148 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:41:20.597113 ignition[923]: INFO : Ignition 2.20.0
Feb 13 19:41:20.597113 ignition[923]: INFO : Stage: mount
Feb 13 19:41:20.599265 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:41:20.599265 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:41:20.601005 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:41:20.605886 ignition[923]: INFO : mount: mount passed
Feb 13 19:41:20.606882 ignition[923]: INFO : Ignition finished successfully
Feb 13 19:41:20.610735 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:41:20.623244 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:41:20.967770 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:41:20.978263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:41:20.985855 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (937)
Feb 13 19:41:20.985887 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:41:20.985903 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:41:20.987369 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:41:20.990152 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:41:20.991145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:41:21.025267 ignition[955]: INFO : Ignition 2.20.0 Feb 13 19:41:21.025267 ignition[955]: INFO : Stage: files Feb 13 19:41:21.027196 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:41:21.027196 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:41:21.027196 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:41:21.031070 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:41:21.031070 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:41:21.031070 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:41:21.031070 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:41:21.031070 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:41:21.030287 unknown[955]: wrote ssh authorized keys file for user: core Feb 13 19:41:21.039016 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:41:21.039016 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 19:41:21.070142 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:41:21.183675 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:41:21.183675 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:41:21.187402 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 19:41:21.434314 systemd-networkd[771]: eth0: Gained IPv6LL Feb 13 19:41:21.725081 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:41:21.822719 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:41:21.824561 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:41:21.826258 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:41:21.827932 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:41:21.829701 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:41:21.831567 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:41:21.833299 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:41:21.834968 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:41:21.836658 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:41:21.838508 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:41:21.840330 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:41:21.842078 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:41:21.844878 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:41:21.847461 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:41:21.849526 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 19:41:22.165187 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:41:22.457568 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:41:22.457568 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:41:22.461206 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:41:22.463390 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:41:22.463390 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:41:22.463390 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 19:41:22.467671 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:41:22.469620 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:41:22.469620 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 19:41:22.469620 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:41:22.493095 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:41:22.499947 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:41:22.501660 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:41:22.501660 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:41:22.501660 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:41:22.501660 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:41:22.501660 
ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:41:22.501660 ignition[955]: INFO : files: files passed Feb 13 19:41:22.501660 ignition[955]: INFO : Ignition finished successfully Feb 13 19:41:22.509837 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:41:22.526308 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:41:22.529362 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:41:22.533977 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:41:22.535142 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:41:22.537881 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:41:22.540496 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:41:22.540496 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:41:22.544062 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:41:22.546579 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:41:22.548166 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:41:22.558262 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:41:22.581560 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:41:22.581683 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:41:22.584236 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:41:22.585407 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:41:22.585801 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:41:22.586612 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:41:22.606992 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:41:22.615274 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:41:22.626562 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:41:22.626752 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:41:22.627110 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:41:22.663116 ignition[1009]: INFO : Ignition 2.20.0 Feb 13 19:41:22.663116 ignition[1009]: INFO : Stage: umount Feb 13 19:41:22.663116 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:41:22.663116 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:41:22.663116 ignition[1009]: INFO : umount: umount passed Feb 13 19:41:22.663116 ignition[1009]: INFO : Ignition finished successfully Feb 13 19:41:22.627443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:41:22.627567 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:41:22.628265 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
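Two patterns from the files stage above are worth spelling out: remote artifacts are fetched with per-attempt logging ("GET …: attempt #1", "GET result: OK"), and "setting preset to enabled/disabled" materializes on disk as creating or removing enablement symlinks for units such as prepare-helm.service and coreos-metadata.service. A hedged sketch of both patterns, not Ignition's actual implementation; the retry policy and the multi-user.target WantedBy are assumptions:

    import time
    import urllib.request
    from pathlib import Path

    SYSROOT = Path("/sysroot")

    def fetch_with_retry(url: str, dest: Path, attempts: int = 3) -> None:
        # Mirrors the "GET <url>: attempt #N" / "GET result: OK" lines above.
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    dest.parent.mkdir(parents=True, exist_ok=True)
                    dest.write_bytes(resp.read())
                print("GET result: OK")
                return
            except OSError as err:
                print(f"GET result: {err}")
                time.sleep(2 ** attempt)  # assumed backoff; real policy unknown
        raise RuntimeError(f"giving up on {url}")

    def set_preset(unit: str, enabled: bool) -> None:
        # On-disk effect of the preset lines above; multi-user.target is an
        # assumption, the real target comes from the unit's [Install] section.
        wants = SYSROOT / "etc/systemd/system/multi-user.target.wants"
        link = wants / unit
        if enabled:
            wants.mkdir(parents=True, exist_ok=True)
            if not link.is_symlink():
                link.symlink_to(f"/etc/systemd/system/{unit}")
        elif link.is_symlink():
            link.unlink()

    # fetch_with_retry("https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz",
    #                  SYSROOT / "opt/helm-v3.17.0-linux-amd64.tar.gz")
    # set_preset("prepare-helm.service", enabled=True)
    # set_preset("coreos-metadata.service", enabled=False)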
Feb 13 19:41:22.628590 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:41:22.628915 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:41:22.629425 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:41:22.629755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:41:22.630097 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:41:22.630428 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:41:22.630765 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:41:22.631110 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:41:22.631451 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:41:22.631743 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:41:22.631872 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:41:22.632745 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:41:22.633101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:41:22.633387 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:41:22.633520 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:41:22.633907 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:41:22.634073 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:41:22.634572 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:41:22.634694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:41:22.635191 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:41:22.635553 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:41:22.637185 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:41:22.637556 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:41:22.637868 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:41:22.638224 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:41:22.638312 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:41:22.638763 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:41:22.638870 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:41:22.639281 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:41:22.639417 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:41:22.639749 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:41:22.639880 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:41:22.641190 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:41:22.641449 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:41:22.641561 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:41:22.642750 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:41:22.643101 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:41:22.643265 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Feb 13 19:41:22.643700 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:41:22.643836 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:41:22.648403 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:41:22.648537 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:41:22.663273 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:41:22.663433 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:41:22.665339 systemd[1]: Stopped target network.target - Network. Feb 13 19:41:22.665952 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:41:22.666032 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:41:22.669747 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:41:22.669796 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:41:22.670563 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:41:22.670609 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:41:22.673428 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:41:22.673476 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:41:22.675522 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:41:22.676849 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:41:22.680494 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:41:22.683162 systemd-networkd[771]: eth0: DHCPv6 lease lost Feb 13 19:41:22.684358 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:41:22.684487 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:41:22.686289 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:41:22.686413 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:41:22.689852 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:41:22.689901 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:41:22.696425 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:41:22.697855 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:41:22.697917 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:41:22.698407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:41:22.698451 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:41:22.698760 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:41:22.698802 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:41:22.699093 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:41:22.699147 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:41:22.699672 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:41:22.713557 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:41:22.713698 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:41:22.720252 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 13 19:41:22.720442 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:41:22.720922 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:41:22.720989 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:41:22.724039 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:41:22.724092 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:41:22.725006 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:41:22.725069 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:41:22.725861 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:41:22.725911 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:41:22.731373 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:41:22.731435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:41:22.740316 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:41:22.741634 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:41:22.741714 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:41:22.742019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:41:22.742075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:41:22.748623 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:41:22.748751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:41:22.823578 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:41:22.823717 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:41:22.825824 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:41:22.827588 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:41:22.827640 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:41:22.837257 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:41:22.846184 systemd[1]: Switching root. Feb 13 19:41:22.887903 systemd-journald[195]: Journal stopped Feb 13 19:41:24.193110 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Feb 13 19:41:24.193200 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:41:24.193220 kernel: SELinux: policy capability open_perms=1 Feb 13 19:41:24.193232 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:41:24.193244 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:41:24.193255 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:41:24.193270 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:41:24.193281 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:41:24.193292 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:41:24.193304 kernel: audit: type=1403 audit(1739475683.444:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:41:24.193316 systemd[1]: Successfully loaded SELinux policy in 38.622ms. Feb 13 19:41:24.193342 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.877ms. 
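The audit record above carries a Unix-epoch timestamp plus a serial number; converting it confirms it lines up with the journal's wall-clock times just before the SELinux policy load report:

    from datetime import datetime, timezone

    # audit(1739475683.444:2) -> epoch seconds and a record serial number
    print(datetime.fromtimestamp(1739475683.444, tz=timezone.utc))
    # 2025-02-13 19:41:23.444000+00:00, matching the journal timestamps above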
Feb 13 19:41:24.193355 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:41:24.193368 systemd[1]: Detected virtualization kvm. Feb 13 19:41:24.193380 systemd[1]: Detected architecture x86-64. Feb 13 19:41:24.193394 systemd[1]: Detected first boot. Feb 13 19:41:24.193413 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:41:24.193425 zram_generator::config[1054]: No configuration found. Feb 13 19:41:24.193438 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:41:24.193451 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:41:24.193465 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:41:24.193477 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:41:24.193492 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:41:24.193509 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:41:24.193521 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:41:24.193533 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:41:24.193545 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:41:24.193558 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:41:24.193570 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:41:24.193582 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:41:24.193594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:41:24.193606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:41:24.193621 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:41:24.193633 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:41:24.193645 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:41:24.193657 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:41:24.193669 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:41:24.193687 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:41:24.193699 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:41:24.193711 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:41:24.193723 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:41:24.193739 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:41:24.193750 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:41:24.193762 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:41:24.193775 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:41:24.193787 systemd[1]: Reached target swap.target - Swaps. 
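The long +/- string in the systemd version banner above encodes compile-time features (the trailing default-hierarchy=unified is a setting, not a feature flag). Splitting it makes the build configuration easy to query:

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 "
                "-IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY "
                "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD "
                "-BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

    enabled = {t[1:] for t in FEATURES.split() if t[0] == "+"}
    disabled = {t[1:] for t in FEATURES.split() if t[0] == "-"}
    assert "SELINUX" in enabled and "APPARMOR" in disabled
    print(f"{len(enabled)} features built in, {len(disabled)} compiled out")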
Feb 13 19:41:24.193800 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:41:24.193811 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:41:24.193823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:41:24.193838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:41:24.193850 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:41:24.193862 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:41:24.193880 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:41:24.193892 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:41:24.193904 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:41:24.193916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:24.193934 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:41:24.193947 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:41:24.193962 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:41:24.193979 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:41:24.193996 systemd[1]: Reached target machines.target - Containers. Feb 13 19:41:24.194011 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:41:24.194025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:41:24.194037 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:41:24.194049 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:41:24.194061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:41:24.194077 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:41:24.194090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:41:24.194102 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:41:24.194114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:41:24.194267 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:41:24.194281 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:41:24.194293 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:41:24.194305 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:41:24.194320 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:41:24.194340 kernel: loop: module loaded Feb 13 19:41:24.194353 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:41:24.194366 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:41:24.194378 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Feb 13 19:41:24.194389 kernel: fuse: init (API version 7.39) Feb 13 19:41:24.194401 kernel: ACPI: bus type drm_connector registered Feb 13 19:41:24.194413 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:41:24.194425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:41:24.194437 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:41:24.194457 systemd[1]: Stopped verity-setup.service. Feb 13 19:41:24.194494 systemd-journald[1124]: Collecting audit messages is disabled. Feb 13 19:41:24.194519 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:24.194531 systemd-journald[1124]: Journal started Feb 13 19:41:24.194552 systemd-journald[1124]: Runtime Journal (/run/log/journal/74e82e1c4b9e4a078f9e509656f39483) is 6.0M, max 48.3M, 42.3M free. Feb 13 19:41:23.958923 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:41:23.983170 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:41:23.983609 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:41:24.198023 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:41:24.198841 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:41:24.200102 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:41:24.201371 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:41:24.202511 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:41:24.203772 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:41:24.205024 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:41:24.206332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:41:24.207801 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:41:24.209445 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:41:24.209622 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:41:24.211219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:41:24.211389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:41:24.212881 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:41:24.213060 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:41:24.214470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:41:24.214642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:41:24.216208 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:41:24.216378 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:41:24.217988 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:41:24.218181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:41:24.219607 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:41:24.221062 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:41:24.222627 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Feb 13 19:41:24.240232 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:41:24.249344 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:41:24.251949 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:41:24.253105 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:41:24.253158 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:41:24.255227 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:41:24.261307 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:41:24.263680 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:41:24.265051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:41:24.267626 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:41:24.270902 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:41:24.273463 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:41:24.275333 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:41:24.276479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:41:24.280541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:41:24.286593 systemd-journald[1124]: Time spent on flushing to /var/log/journal/74e82e1c4b9e4a078f9e509656f39483 is 22.633ms for 952 entries. Feb 13 19:41:24.286593 systemd-journald[1124]: System Journal (/var/log/journal/74e82e1c4b9e4a078f9e509656f39483) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:41:24.315491 systemd-journald[1124]: Received client request to flush runtime journal. Feb 13 19:41:24.315525 kernel: loop0: detected capacity change from 0 to 141000 Feb 13 19:41:24.284307 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:41:24.289028 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:41:24.293465 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:41:24.295566 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:41:24.297273 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:41:24.299033 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:41:24.301921 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:41:24.313125 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:41:24.326462 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:41:24.330195 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:41:24.332286 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:41:24.334517 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
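The flush report above (22.633 ms for 952 entries) gives a quick per-entry cost estimate:

    flush_ms, entries = 22.633, 952
    print(f"≈{flush_ms / entries * 1000:.1f} µs per entry flushed")  # ≈23.8 µs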
Feb 13 19:41:24.345785 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:41:24.347882 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:41:24.350152 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:41:24.350860 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:41:24.360311 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:41:24.368432 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:41:24.371202 kernel: loop1: detected capacity change from 0 to 138184 Feb 13 19:41:24.391014 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 13 19:41:24.391033 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 13 19:41:24.397510 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:41:24.399149 kernel: loop2: detected capacity change from 0 to 218376 Feb 13 19:41:24.442159 kernel: loop3: detected capacity change from 0 to 141000 Feb 13 19:41:24.455181 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 19:41:24.468153 kernel: loop5: detected capacity change from 0 to 218376 Feb 13 19:41:24.476188 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:41:24.476802 (sd-merge)[1192]: Merged extensions into '/usr'. Feb 13 19:41:24.481224 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:41:24.481240 systemd[1]: Reloading... Feb 13 19:41:24.553153 zram_generator::config[1224]: No configuration found. Feb 13 19:41:24.615990 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:41:24.671435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:41:24.721087 systemd[1]: Reloading finished in 239 ms. Feb 13 19:41:24.766941 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:41:24.768473 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:41:24.781272 systemd[1]: Starting ensure-sysext.service... Feb 13 19:41:24.783246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:41:24.792755 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:41:24.792774 systemd[1]: Reloading... Feb 13 19:41:24.814068 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:41:24.814441 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:41:24.815561 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:41:24.815959 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Feb 13 19:41:24.816064 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Feb 13 19:41:24.820757 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. 
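The sd-merge step above is systemd-sysext at work: the 'kubernetes' image it merges into /usr is the /etc/extensions/kubernetes.raw symlink the Ignition files stage wrote earlier in this log, resolving into /opt/extensions. A sketch of enumerating candidate images; the search directories are quoted from the systemd-sysext documentation as an assumption, and this is not the merge logic itself:

    from pathlib import Path

    # Directories systemd-sysext scans for extension images (assumed here
    # for illustration):
    SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH:
        for img in sorted(Path(d).glob("*.raw")):
            print(f"{img} -> {img.resolve()}")  # kubernetes.raw -> /opt/...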
Feb 13 19:41:24.820773 systemd-tmpfiles[1256]: Skipping /boot Feb 13 19:41:24.835575 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:41:24.835880 systemd-tmpfiles[1256]: Skipping /boot Feb 13 19:41:24.846188 zram_generator::config[1283]: No configuration found. Feb 13 19:41:24.972563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:41:25.021322 systemd[1]: Reloading finished in 228 ms. Feb 13 19:41:25.041644 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:41:25.054842 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:41:25.066872 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:41:25.069522 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:41:25.071982 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:41:25.077426 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:41:25.081377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:41:25.084423 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:41:25.090263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:25.090500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:41:25.092076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:41:25.098937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:41:25.102228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:41:25.103395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:41:25.109203 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:41:25.110409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:25.111602 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:41:25.113571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:41:25.113765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:41:25.115472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:41:25.115635 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:41:25.117464 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:41:25.117814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:41:25.119739 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Feb 13 19:41:25.129225 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 19:41:25.129462 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:41:25.135237 augenrules[1356]: No rules Feb 13 19:41:25.140506 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:41:25.142515 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:41:25.142756 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:41:25.148346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:41:25.150828 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:25.151052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:41:25.154470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:41:25.157317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:41:25.161360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:41:25.162588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:41:25.167332 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:41:25.168487 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:25.169397 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:41:25.172267 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:41:25.174183 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:41:25.175879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:41:25.176706 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:41:25.190559 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:41:25.205810 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:25.212352 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:41:25.213548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:41:25.217928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:41:25.233055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:41:25.239509 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1365) Feb 13 19:41:25.234540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:41:25.234610 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:41:25.234633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:25.235585 systemd[1]: Finished ensure-sysext.service. 
Feb 13 19:41:25.238496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:41:25.238661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:41:25.240392 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:41:25.240580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:41:25.242268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:41:25.242443 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:41:25.244265 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:41:25.245330 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:41:25.257667 systemd-resolved[1325]: Positive Trust Anchors: Feb 13 19:41:25.257687 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:41:25.266333 augenrules[1396]: /sbin/augenrules: No change Feb 13 19:41:25.257719 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:41:25.263393 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:41:25.265599 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:41:25.265642 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:41:25.270567 systemd-resolved[1325]: Defaulting to hostname 'linux'. Feb 13 19:41:25.270856 augenrules[1425]: No rules Feb 13 19:41:25.277303 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:41:25.279180 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:41:25.280789 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:41:25.281025 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:41:25.286281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:41:25.303608 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:41:25.311802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:41:25.320184 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:41:25.322316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:41:25.327542 systemd-networkd[1379]: lo: Link UP Feb 13 19:41:25.327874 systemd-networkd[1379]: lo: Gained carrier Feb 13 19:41:25.333161 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 19:41:25.341303 systemd-networkd[1379]: Enumeration completed Feb 13 19:41:25.341718 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
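The positive trust anchor above is the root zone's DNSSEC DS record; its fields split into owner, class, type, key tag, algorithm, digest type, and digest:

    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, cls, rtype, key_tag, alg, digest_type, digest = record.split()
    # key tag 20326 is the root KSK-2017; algorithm 8 = RSA/SHA-256,
    # digest type 2 = SHA-256
    print(f"{rtype} for {owner!r}: key tag {key_tag}, digest {digest[:16]}…")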
Feb 13 19:41:25.341730 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:41:25.342215 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:41:25.343124 systemd-networkd[1379]: eth0: Link UP Feb 13 19:41:25.343140 systemd-networkd[1379]: eth0: Gained carrier Feb 13 19:41:25.343153 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:41:25.343666 systemd[1]: Reached target network.target - Network. Feb 13 19:41:25.352330 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:41:25.362767 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:41:25.364404 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:41:25.365383 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:41:25.365818 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:41:25.367514 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Feb 13 19:41:26.268897 systemd-timesyncd[1426]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:41:26.268946 systemd-timesyncd[1426]: Initial clock synchronization to Thu 2025-02-13 19:41:26.268730 UTC. Feb 13 19:41:26.269586 systemd-resolved[1325]: Clock change detected. Flushing caches. Feb 13 19:41:26.339529 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:41:26.343003 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:41:26.343033 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:41:26.343624 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:41:26.340784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:41:26.355946 kernel: kvm_amd: TSC scaling supported Feb 13 19:41:26.355977 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:41:26.355990 kernel: kvm_amd: Nested Paging enabled Feb 13 19:41:26.356920 kernel: kvm_amd: LBR virtualization supported Feb 13 19:41:26.356935 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:41:26.356958 kernel: kvm_amd: Virtual GIF supported Feb 13 19:41:26.379465 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:41:26.424151 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:41:26.434010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:41:26.448599 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:41:26.456915 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:41:26.490273 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:41:26.492145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:41:26.493536 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:41:26.494966 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:41:26.496773 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:41:26.498489 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
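The journal timestamps jump by almost a second at this point because systemd-timesyncd stepped the clock after contacting 10.0.0.1; systemd-resolved's "Clock change detected. Flushing caches." confirms the step. The two adjacent entries only bracket the step, so the difference below is a rough estimate, not the exact offset:

    from datetime import datetime

    # last journal timestamp before the step vs. the first one after it
    before = datetime.fromisoformat("2025-02-13 19:41:25.367514")
    after = datetime.fromisoformat("2025-02-13 19:41:26.268897")
    print(f"clock stepped forward by ≈{(after - before).total_seconds():.3f}s")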
Feb 13 19:41:26.499943 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:41:26.501378 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:41:26.502948 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:41:26.502998 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:41:26.504015 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:41:26.505988 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:41:26.508956 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:41:26.520225 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:41:26.522912 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:41:26.524617 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:41:26.525783 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:41:26.526760 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:41:26.527743 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:41:26.527779 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:41:26.528850 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:41:26.531026 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:41:26.533511 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:41:26.536620 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:41:26.539384 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:41:26.540540 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:41:26.542000 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:41:26.543891 jq[1456]: false Feb 13 19:41:26.547522 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:41:26.550652 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:41:26.558555 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 19:41:26.562529 extend-filesystems[1457]: Found loop3 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found loop4 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found loop5 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found sr0 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found vda Feb 13 19:41:26.562529 extend-filesystems[1457]: Found vda1 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found vda2 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found vda3 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found usr Feb 13 19:41:26.562529 extend-filesystems[1457]: Found vda4 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found vda6 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found vda7 Feb 13 19:41:26.562529 extend-filesystems[1457]: Found vda9 Feb 13 19:41:26.562529 extend-filesystems[1457]: Checking size of /dev/vda9 Feb 13 19:41:26.569807 dbus-daemon[1455]: [system] SELinux support is enabled Feb 13 19:41:26.564446 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:41:26.568501 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:41:26.569057 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:41:26.570564 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:41:26.573593 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:41:26.577589 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:41:26.583860 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:41:26.586800 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:41:26.586997 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:41:26.587337 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:41:26.587687 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:41:26.590848 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:41:26.591035 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:41:26.594816 update_engine[1471]: I20250213 19:41:26.594729 1471 main.cc:92] Flatcar Update Engine starting Feb 13 19:41:26.596500 jq[1474]: true Feb 13 19:41:26.602515 extend-filesystems[1457]: Resized partition /dev/vda9 Feb 13 19:41:26.609568 update_engine[1471]: I20250213 19:41:26.609398 1471 update_check_scheduler.cc:74] Next update check in 8m38s Feb 13 19:41:26.610720 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:41:26.622516 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1376) Feb 13 19:41:26.622556 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:41:26.611641 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:41:26.611684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
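The resize above is reported in filesystem blocks; with the 4k block size resize2fs notes just below, the root filesystem grows from roughly 2.1 GiB to 7.1 GiB:

    BLOCK = 4096  # ext4 block size, per the "(4k)" in the resize2fs report
    old_blocks, new_blocks = 553472, 1864699
    print(f"{old_blocks * BLOCK / 2**30:.2f} GiB -> "
          f"{new_blocks * BLOCK / 2**30:.2f} GiB")  # 2.11 GiB -> 7.11 GiB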
Feb 13 19:41:26.613032 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:41:26.613049 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:41:26.616968 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:41:26.618054 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:41:26.623178 jq[1485]: true Feb 13 19:41:26.623515 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:41:26.633695 tar[1478]: linux-amd64/LICENSE Feb 13 19:41:26.633950 tar[1478]: linux-amd64/helm Feb 13 19:41:26.649454 systemd-logind[1469]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:41:26.649483 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:41:26.654445 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:41:26.655914 systemd-logind[1469]: New seat seat0. Feb 13 19:41:26.659366 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:41:26.687269 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:41:26.687269 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:41:26.687269 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:41:26.696240 extend-filesystems[1457]: Resized filesystem in /dev/vda9 Feb 13 19:41:26.689735 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:41:26.689963 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:41:26.704127 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:41:26.705646 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:41:26.708879 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:41:26.713097 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:41:26.819333 containerd[1486]: time="2025-02-13T19:41:26.819230097Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:41:26.845257 containerd[1486]: time="2025-02-13T19:41:26.845141320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:26.846915 containerd[1486]: time="2025-02-13T19:41:26.846878307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:26.846915 containerd[1486]: time="2025-02-13T19:41:26.846904496Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:41:26.846993 containerd[1486]: time="2025-02-13T19:41:26.846919735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:41:26.847129 containerd[1486]: time="2025-02-13T19:41:26.847099973Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 19:41:26.847129 containerd[1486]: time="2025-02-13T19:41:26.847121253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847219 containerd[1486]: time="2025-02-13T19:41:26.847196745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847219 containerd[1486]: time="2025-02-13T19:41:26.847213195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847436 containerd[1486]: time="2025-02-13T19:41:26.847404755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847460 containerd[1486]: time="2025-02-13T19:41:26.847439009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847460 containerd[1486]: time="2025-02-13T19:41:26.847451883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847507 containerd[1486]: time="2025-02-13T19:41:26.847460900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847574 containerd[1486]: time="2025-02-13T19:41:26.847553544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847810 containerd[1486]: time="2025-02-13T19:41:26.847781191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847918 containerd[1486]: time="2025-02-13T19:41:26.847897489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:26.847918 containerd[1486]: time="2025-02-13T19:41:26.847912747Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:41:26.848031 containerd[1486]: time="2025-02-13T19:41:26.848012304Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:41:26.848088 containerd[1486]: time="2025-02-13T19:41:26.848070152Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:41:26.852850 containerd[1486]: time="2025-02-13T19:41:26.852814169Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:41:26.852887 containerd[1486]: time="2025-02-13T19:41:26.852869784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:41:26.852934 containerd[1486]: time="2025-02-13T19:41:26.852886315Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:41:26.852934 containerd[1486]: time="2025-02-13T19:41:26.852901453Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 19:41:26.852934 containerd[1486]: time="2025-02-13T19:41:26.852914347Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:41:26.853078 containerd[1486]: time="2025-02-13T19:41:26.853039712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:41:26.853296 containerd[1486]: time="2025-02-13T19:41:26.853278129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:41:26.853403 containerd[1486]: time="2025-02-13T19:41:26.853381663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:41:26.853452 containerd[1486]: time="2025-02-13T19:41:26.853401190Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:41:26.853452 containerd[1486]: time="2025-02-13T19:41:26.853416449Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:41:26.853452 containerd[1486]: time="2025-02-13T19:41:26.853445744Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:41:26.853507 containerd[1486]: time="2025-02-13T19:41:26.853458087Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:41:26.853507 containerd[1486]: time="2025-02-13T19:41:26.853470931Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:41:26.853507 containerd[1486]: time="2025-02-13T19:41:26.853483134Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:41:26.853507 containerd[1486]: time="2025-02-13T19:41:26.853496158Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:41:26.853507 containerd[1486]: time="2025-02-13T19:41:26.853507159Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:41:26.853602 containerd[1486]: time="2025-02-13T19:41:26.853518871Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:41:26.853602 containerd[1486]: time="2025-02-13T19:41:26.853530392Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:41:26.853602 containerd[1486]: time="2025-02-13T19:41:26.853549298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853602 containerd[1486]: time="2025-02-13T19:41:26.853561761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853602 containerd[1486]: time="2025-02-13T19:41:26.853572952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853602 containerd[1486]: time="2025-02-13T19:41:26.853584875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853602 containerd[1486]: time="2025-02-13T19:41:26.853602948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853616804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853628196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853641280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853654255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853668061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853679732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853691214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853702716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853716922Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:41:26.853730 containerd[1486]: time="2025-02-13T19:41:26.853735166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853748251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853758590Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853797072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853811409Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853821067Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853832830Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853841836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853858538Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853868286Z" level=info msg="NRI interface is disabled by configuration." 
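This probe-and-skip cascade is how containerd settles on a snapshotter: aufs fails because the kernel module is absent, btrfs and zfs are skipped because /var/lib/containerd sits on ext4, devmapper is unconfigured, and overlayfs wins by default, matching Snapshotter:overlayfs in the CRI config dump just below. Pinned explicitly, the relevant containerd stanza would look like this sketch (the key paths are standard containerd v2 config; the values are taken from the dump below):

    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true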
Feb 13 19:41:26.853910 containerd[1486]: time="2025-02-13T19:41:26.853877924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:41:26.854191 containerd[1486]: time="2025-02-13T19:41:26.854128304Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:41:26.854191 containerd[1486]: time="2025-02-13T19:41:26.854171905Z" level=info msg="Connect containerd service" Feb 13 19:41:26.854347 containerd[1486]: time="2025-02-13T19:41:26.854218192Z" level=info msg="using legacy CRI server" Feb 13 19:41:26.854347 containerd[1486]: time="2025-02-13T19:41:26.854225987Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:41:26.854347 containerd[1486]: time="2025-02-13T19:41:26.854335392Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:41:26.854909 containerd[1486]: time="2025-02-13T19:41:26.854865907Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855168965Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855194934Z" level=info msg="Start subscribing containerd event" Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855262932Z" level=info msg="Start recovering state" Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855231312Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855352449Z" level=info msg="Start event monitor" Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855364853Z" level=info msg="Start snapshots syncer" Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855375593Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855382656Z" level=info msg="Start streaming server" Feb 13 19:41:26.855513 containerd[1486]: time="2025-02-13T19:41:26.855490348Z" level=info msg="containerd successfully booted in 0.037384s" Feb 13 19:41:26.855841 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:41:26.929383 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:41:26.952660 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:41:26.960627 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:41:26.966900 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:41:26.967092 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:41:26.970604 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:41:26.985196 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:41:26.987940 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:41:26.990308 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:41:26.991567 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:41:27.078674 tar[1478]: linux-amd64/README.md Feb 13 19:41:27.094280 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:41:27.582670 systemd-networkd[1379]: eth0: Gained IPv6LL Feb 13 19:41:27.586318 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:41:27.588149 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:41:27.599652 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:41:27.602105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:41:27.604317 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:41:27.626262 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:41:27.626533 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:41:27.628539 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:41:27.629787 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:41:29.010224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:41:29.012010 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:41:29.013392 systemd[1]: Startup finished in 705ms (kernel) + 5.741s (initrd) + 4.706s (userspace) = 11.152s. Feb 13 19:41:29.026044 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:41:29.030031 agetty[1542]: failed to open credentials directory Feb 13 19:41:29.037254 agetty[1541]: failed to open credentials directory Feb 13 19:41:29.719902 kubelet[1568]: E0213 19:41:29.719829 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:41:29.723803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:41:29.723992 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:41:29.724306 systemd[1]: kubelet.service: Consumed 1.978s CPU time. Feb 13 19:41:36.386246 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:41:36.387518 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:33262.service - OpenSSH per-connection server daemon (10.0.0.1:33262). Feb 13 19:41:36.441082 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 33262 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:36.443120 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:36.451443 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:41:36.461641 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:41:36.463402 systemd-logind[1469]: New session 1 of user core. Feb 13 19:41:36.472207 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:41:36.475041 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:41:36.482644 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:41:36.581557 systemd[1585]: Queued start job for default target default.target. Feb 13 19:41:36.590700 systemd[1585]: Created slice app.slice - User Application Slice. Feb 13 19:41:36.590726 systemd[1585]: Reached target paths.target - Paths. Feb 13 19:41:36.590741 systemd[1585]: Reached target timers.target - Timers. Feb 13 19:41:36.592322 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:41:36.605738 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:41:36.605882 systemd[1585]: Reached target sockets.target - Sockets. Feb 13 19:41:36.605903 systemd[1585]: Reached target basic.target - Basic System. Feb 13 19:41:36.605942 systemd[1585]: Reached target default.target - Main User Target. Feb 13 19:41:36.605990 systemd[1585]: Startup finished in 116ms. Feb 13 19:41:36.606440 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:41:36.607927 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:41:36.667845 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:33268.service - OpenSSH per-connection server daemon (10.0.0.1:33268). 
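The kubelet exit above is the root of the restart loop that follows: the unit starts before /var/lib/kubelet/config.yaml exists, and that file is normally written by kubeadm init or kubeadm join rather than shipped with the OS, so every attempt until then dies with status 1. For orientation, a minimal KubeletConfiguration of the shape kubeadm writes, as an illustrative sketch with values consistent with settings visible later in this log:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches SystemdCgroup=true in containerd's runc options
    staticPodPath: /etc/kubernetes/manifests   # the static pod path the kubelet logs once it runs
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt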
Feb 13 19:41:36.715705 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 33268 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:36.717067 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:36.720921 systemd-logind[1469]: New session 2 of user core. Feb 13 19:41:36.730558 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:41:36.784072 sshd[1598]: Connection closed by 10.0.0.1 port 33268 Feb 13 19:41:36.784523 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:36.796245 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:33268.service: Deactivated successfully. Feb 13 19:41:36.797992 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:41:36.799671 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:41:36.808812 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:33280.service - OpenSSH per-connection server daemon (10.0.0.1:33280). Feb 13 19:41:36.809739 systemd-logind[1469]: Removed session 2. Feb 13 19:41:36.843260 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 33280 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:36.844661 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:36.848455 systemd-logind[1469]: New session 3 of user core. Feb 13 19:41:36.858544 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:41:36.907530 sshd[1605]: Connection closed by 10.0.0.1 port 33280 Feb 13 19:41:36.908024 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:36.920119 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:33280.service: Deactivated successfully. Feb 13 19:41:36.922076 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:41:36.923692 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:41:36.924916 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:33284.service - OpenSSH per-connection server daemon (10.0.0.1:33284). Feb 13 19:41:36.925602 systemd-logind[1469]: Removed session 3. Feb 13 19:41:36.964647 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 33284 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:36.966314 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:36.970348 systemd-logind[1469]: New session 4 of user core. Feb 13 19:41:36.976537 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:41:37.029754 sshd[1612]: Connection closed by 10.0.0.1 port 33284 Feb 13 19:41:37.030171 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:37.041165 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:33284.service: Deactivated successfully. Feb 13 19:41:37.042914 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:41:37.044537 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:41:37.045748 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:33300.service - OpenSSH per-connection server daemon (10.0.0.1:33300). Feb 13 19:41:37.046531 systemd-logind[1469]: Removed session 4. 
Feb 13 19:41:37.084104 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 33300 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:37.085527 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:37.089276 systemd-logind[1469]: New session 5 of user core. Feb 13 19:41:37.101544 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:41:37.160450 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:41:37.160788 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:41:37.174754 sudo[1620]: pam_unix(sudo:session): session closed for user root Feb 13 19:41:37.176163 sshd[1619]: Connection closed by 10.0.0.1 port 33300 Feb 13 19:41:37.176681 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:37.190291 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:33300.service: Deactivated successfully. Feb 13 19:41:37.192063 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:41:37.193734 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:41:37.202685 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:33306.service - OpenSSH per-connection server daemon (10.0.0.1:33306). Feb 13 19:41:37.203617 systemd-logind[1469]: Removed session 5. Feb 13 19:41:37.238216 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 33306 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:37.239766 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:37.243720 systemd-logind[1469]: New session 6 of user core. Feb 13 19:41:37.253582 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:41:37.307111 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:41:37.307467 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:41:37.311263 sudo[1629]: pam_unix(sudo:session): session closed for user root Feb 13 19:41:37.317558 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:41:37.317887 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:41:37.336680 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:41:37.365936 augenrules[1651]: No rules Feb 13 19:41:37.367738 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:41:37.367967 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:41:37.369166 sudo[1628]: pam_unix(sudo:session): session closed for user root Feb 13 19:41:37.370635 sshd[1627]: Connection closed by 10.0.0.1 port 33306 Feb 13 19:41:37.370989 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:37.387323 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:33306.service: Deactivated successfully. Feb 13 19:41:37.389198 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:41:37.390844 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:41:37.392086 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:33310.service - OpenSSH per-connection server daemon (10.0.0.1:33310). Feb 13 19:41:37.392893 systemd-logind[1469]: Removed session 6. 
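The sudo session above deliberately empties the audit configuration: it deletes the stock SELinux and default rule files from /etc/audit/rules.d/ and restarts audit-rules, after which augenrules reports "No rules" because there is nothing left to compile. Restoring auditing is the reverse operation; the rule below is purely illustrative, not something from this system:

    echo '-w /var/lib/kubelet/ -p wa -k kubelet-config' > /etc/audit/rules.d/90-kubelet.rules
    augenrules --load     # recompile /etc/audit/rules.d/*.rules into the live audit rule set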
Feb 13 19:41:37.443458 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 33310 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:37.445103 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:37.449074 systemd-logind[1469]: New session 7 of user core. Feb 13 19:41:37.458558 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:41:37.511448 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:41:37.511802 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:41:38.103621 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:41:38.104791 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:41:38.710626 dockerd[1683]: time="2025-02-13T19:41:38.710532053Z" level=info msg="Starting up" Feb 13 19:41:38.860223 dockerd[1683]: time="2025-02-13T19:41:38.860172502Z" level=info msg="Loading containers: start." Feb 13 19:41:39.053451 kernel: Initializing XFRM netlink socket Feb 13 19:41:39.144392 systemd-networkd[1379]: docker0: Link UP Feb 13 19:41:39.187044 dockerd[1683]: time="2025-02-13T19:41:39.186984662Z" level=info msg="Loading containers: done." Feb 13 19:41:39.207829 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1416661311-merged.mount: Deactivated successfully. Feb 13 19:41:39.210063 dockerd[1683]: time="2025-02-13T19:41:39.210025471Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:41:39.210151 dockerd[1683]: time="2025-02-13T19:41:39.210139265Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:41:39.210266 dockerd[1683]: time="2025-02-13T19:41:39.210248650Z" level=info msg="Daemon has completed initialization" Feb 13 19:41:39.246126 dockerd[1683]: time="2025-02-13T19:41:39.246039981Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:41:39.246273 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:41:39.974386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:41:39.983653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:41:40.064996 containerd[1486]: time="2025-02-13T19:41:40.064956883Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:41:40.185678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
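Note that the PullImage request above reaches containerd's CRI image service while the kubelet is still failing, so some other client (presumably the installer from session 7) is driving the CRI API directly. crictl is not shown in this log, but the hand-run equivalent against this containerd, using the socket path from the CRI config dump earlier, would be:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.32.2

The overlay2 "Not using native diff" warning from dockerd is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, docker uses a slower but correct diff path, which only matters for image-build performance.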
Feb 13 19:41:40.190595 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:41:40.449067 kubelet[1887]: E0213 19:41:40.448949 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:41:40.455635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:41:40.455902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:41:41.382895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2234123336.mount: Deactivated successfully. Feb 13 19:41:42.667541 containerd[1486]: time="2025-02-13T19:41:42.667485122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:42.668298 containerd[1486]: time="2025-02-13T19:41:42.668267610Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:41:42.669455 containerd[1486]: time="2025-02-13T19:41:42.669390696Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:42.672986 containerd[1486]: time="2025-02-13T19:41:42.672938269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:42.674049 containerd[1486]: time="2025-02-13T19:41:42.674000090Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 2.609003603s" Feb 13 19:41:42.674263 containerd[1486]: time="2025-02-13T19:41:42.674052108Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:41:42.675238 containerd[1486]: time="2025-02-13T19:41:42.675206463Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:41:44.048107 containerd[1486]: time="2025-02-13T19:41:44.048043652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:44.048934 containerd[1486]: time="2025-02-13T19:41:44.048869792Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:41:44.050249 containerd[1486]: time="2025-02-13T19:41:44.050191761Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:44.052995 containerd[1486]: time="2025-02-13T19:41:44.052959391Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:44.053953 containerd[1486]: time="2025-02-13T19:41:44.053917458Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.378554812s" Feb 13 19:41:44.053953 containerd[1486]: time="2025-02-13T19:41:44.053951391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:41:44.054476 containerd[1486]: time="2025-02-13T19:41:44.054396767Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:41:45.319026 containerd[1486]: time="2025-02-13T19:41:45.318972217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:45.319860 containerd[1486]: time="2025-02-13T19:41:45.319794920Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 19:41:45.320924 containerd[1486]: time="2025-02-13T19:41:45.320865317Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:45.326626 containerd[1486]: time="2025-02-13T19:41:45.326596425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:45.327627 containerd[1486]: time="2025-02-13T19:41:45.327601009Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.273176621s" Feb 13 19:41:45.327665 containerd[1486]: time="2025-02-13T19:41:45.327626226Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:41:45.328095 containerd[1486]: time="2025-02-13T19:41:45.328067173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:41:46.635785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048061569.mount: Deactivated successfully. 
Feb 13 19:41:47.568672 containerd[1486]: time="2025-02-13T19:41:47.568593891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:47.569606 containerd[1486]: time="2025-02-13T19:41:47.569544433Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:41:47.571100 containerd[1486]: time="2025-02-13T19:41:47.571060356Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:47.573360 containerd[1486]: time="2025-02-13T19:41:47.573307551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:47.573948 containerd[1486]: time="2025-02-13T19:41:47.573912034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.245818952s" Feb 13 19:41:47.573996 containerd[1486]: time="2025-02-13T19:41:47.573947030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:41:47.574571 containerd[1486]: time="2025-02-13T19:41:47.574521487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:41:48.087100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1534343221.mount: Deactivated successfully. 
Feb 13 19:41:50.140890 containerd[1486]: time="2025-02-13T19:41:50.140811667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:50.155799 containerd[1486]: time="2025-02-13T19:41:50.155740975Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:41:50.159995 containerd[1486]: time="2025-02-13T19:41:50.159948906Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:50.170784 containerd[1486]: time="2025-02-13T19:41:50.170728432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:50.171829 containerd[1486]: time="2025-02-13T19:41:50.171792918Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.597232989s" Feb 13 19:41:50.171829 containerd[1486]: time="2025-02-13T19:41:50.171824377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:41:50.172377 containerd[1486]: time="2025-02-13T19:41:50.172341697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:41:50.476726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:41:50.485586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:41:50.642907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:41:50.647236 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:41:50.763321 kubelet[2027]: E0213 19:41:50.763118 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:41:50.767786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:41:50.768024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:41:53.074623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236925337.mount: Deactivated successfully. 
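The kubelet has now failed twice and systemd schedules restart number 2. The cadence is regular: each exit is followed by a new start attempt about ten seconds later (19:41:29.72 → 19:41:39.97, then 19:41:40.46 → 19:41:50.48), consistent with a service stanza like the following, which is an inference from the timing rather than a quote of the shipped unit:

    [Service]
    Restart=on-failure     # assumed: failures with exit status 1 keep being retried
    RestartSec=10          # assumed: matches the ~10 s gap between attempts above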
Feb 13 19:41:53.215853 containerd[1486]: time="2025-02-13T19:41:53.215767878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:53.244282 containerd[1486]: time="2025-02-13T19:41:53.244219735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:41:53.267774 containerd[1486]: time="2025-02-13T19:41:53.267720978Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:53.301476 containerd[1486]: time="2025-02-13T19:41:53.301403956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:53.302520 containerd[1486]: time="2025-02-13T19:41:53.302483150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.130110565s" Feb 13 19:41:53.302520 containerd[1486]: time="2025-02-13T19:41:53.302516703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:41:53.303076 containerd[1486]: time="2025-02-13T19:41:53.303040956Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:41:54.957905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139195278.mount: Deactivated successfully. Feb 13 19:41:56.830370 containerd[1486]: time="2025-02-13T19:41:56.830301559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.831366 containerd[1486]: time="2025-02-13T19:41:56.831275966Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:41:56.833583 containerd[1486]: time="2025-02-13T19:41:56.833551343Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.836310 containerd[1486]: time="2025-02-13T19:41:56.836281754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.837542 containerd[1486]: time="2025-02-13T19:41:56.837509256Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.534435368s" Feb 13 19:41:56.837542 containerd[1486]: time="2025-02-13T19:41:56.837535875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:41:59.908744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:41:59.918642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:41:59.941271 systemd[1]: Reloading requested from client PID 2124 ('systemctl') (unit session-7.scope)... Feb 13 19:41:59.941284 systemd[1]: Reloading... Feb 13 19:42:00.018446 zram_generator::config[2163]: No configuration found. Feb 13 19:42:00.486731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:42:00.574546 systemd[1]: Reloading finished in 632 ms. Feb 13 19:42:00.632589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:00.635878 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:42:00.636190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:00.646710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:00.809288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:00.814941 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:42:00.854775 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:42:00.854775 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:42:00.854775 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
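The /var/run warning during the reload above shows systemd rewriting the socket path on the fly; the durable fix it asks for is a drop-in that clears the legacy listener and re-adds it under /run (an empty ListenStream= resets the list before the new value is appended):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf (sketch)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock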
Feb 13 19:42:00.855165 kubelet[2213]: I0213 19:42:00.854846 2213 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:42:01.194291 kubelet[2213]: I0213 19:42:01.194191 2213 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:42:01.194291 kubelet[2213]: I0213 19:42:01.194220 2213 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:42:01.194527 kubelet[2213]: I0213 19:42:01.194509 2213 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:42:01.213937 kubelet[2213]: E0213 19:42:01.213891 2213 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:01.215399 kubelet[2213]: I0213 19:42:01.215358 2213 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:42:01.222473 kubelet[2213]: E0213 19:42:01.222436 2213 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:42:01.222658 kubelet[2213]: I0213 19:42:01.222628 2213 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:42:01.227945 kubelet[2213]: I0213 19:42:01.227917 2213 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:42:01.229029 kubelet[2213]: I0213 19:42:01.228983 2213 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:42:01.229198 kubelet[2213]: I0213 19:42:01.229020 2213 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:42:01.229292 kubelet[2213]: I0213 19:42:01.229203 2213 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:42:01.229292 kubelet[2213]: I0213 19:42:01.229212 2213 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:42:01.229386 kubelet[2213]: I0213 19:42:01.229363 2213 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:01.232184 kubelet[2213]: I0213 19:42:01.232159 2213 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:42:01.232184 kubelet[2213]: I0213 19:42:01.232181 2213 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:42:01.232239 kubelet[2213]: I0213 19:42:01.232201 2213 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:42:01.232239 kubelet[2213]: I0213 19:42:01.232215 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:42:01.237542 kubelet[2213]: W0213 19:42:01.237478 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:01.237830 kubelet[2213]: I0213 19:42:01.237687 2213 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:42:01.237830 kubelet[2213]: E0213 19:42:01.237713 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:01.237830 kubelet[2213]: W0213 19:42:01.237577 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:01.237830 kubelet[2213]: E0213 19:42:01.237747 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:01.238317 kubelet[2213]: I0213 19:42:01.238303 2213 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:42:01.239745 kubelet[2213]: W0213 19:42:01.239711 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:42:01.241946 kubelet[2213]: I0213 19:42:01.241928 2213 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:42:01.242002 kubelet[2213]: I0213 19:42:01.241968 2213 server.go:1287] "Started kubelet" Feb 13 19:42:01.243286 kubelet[2213]: I0213 19:42:01.242433 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:42:01.243286 kubelet[2213]: I0213 19:42:01.242797 2213 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:42:01.243286 kubelet[2213]: I0213 19:42:01.242776 2213 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:42:01.243371 kubelet[2213]: I0213 19:42:01.243327 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:42:01.243991 kubelet[2213]: I0213 19:42:01.243904 2213 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:42:01.244830 kubelet[2213]: E0213 19:42:01.244729 2213 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:42:01.244871 kubelet[2213]: I0213 19:42:01.244850 2213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:42:01.245219 kubelet[2213]: E0213 19:42:01.245199 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:01.245269 kubelet[2213]: I0213 19:42:01.245227 2213 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:42:01.245404 kubelet[2213]: I0213 19:42:01.245374 2213 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:42:01.245465 kubelet[2213]: I0213 19:42:01.245437 2213 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:42:01.246192 kubelet[2213]: W0213 19:42:01.245696 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:01.246192 kubelet[2213]: E0213 19:42:01.245735 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:01.246192 kubelet[2213]: E0213 19:42:01.245985 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="200ms" Feb 13 19:42:01.246192 kubelet[2213]: E0213 19:42:01.245192 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.105:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dbf3cc0adaf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:42:01.241942775 +0000 UTC m=+0.422623715,LastTimestamp:2025-02-13 19:42:01.241942775 +0000 UTC m=+0.422623715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:42:01.246599 kubelet[2213]: I0213 19:42:01.246567 2213 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:42:01.246714 kubelet[2213]: I0213 19:42:01.246651 2213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:42:01.247547 kubelet[2213]: I0213 19:42:01.247529 2213 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:42:01.259311 kubelet[2213]: I0213 19:42:01.259252 2213 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 13 19:42:01.261453 kubelet[2213]: I0213 19:42:01.261396 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:42:01.261559 kubelet[2213]: I0213 19:42:01.261469 2213 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:42:01.261559 kubelet[2213]: I0213 19:42:01.261498 2213 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:42:01.261559 kubelet[2213]: I0213 19:42:01.261508 2213 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:42:01.261653 kubelet[2213]: E0213 19:42:01.261590 2213 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:42:01.262649 kubelet[2213]: W0213 19:42:01.262398 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:01.262649 kubelet[2213]: E0213 19:42:01.262461 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:01.262649 kubelet[2213]: I0213 19:42:01.262526 2213 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:42:01.262649 kubelet[2213]: I0213 19:42:01.262534 2213 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:42:01.262649 kubelet[2213]: I0213 19:42:01.262552 2213 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:01.346269 kubelet[2213]: E0213 19:42:01.346232 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:01.362455 kubelet[2213]: E0213 19:42:01.362404 2213 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:42:01.446846 kubelet[2213]: E0213 19:42:01.446771 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:01.447180 kubelet[2213]: E0213 19:42:01.447141 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="400ms" Feb 13 19:42:01.547364 kubelet[2213]: E0213 19:42:01.547322 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:01.563482 kubelet[2213]: E0213 19:42:01.563453 2213 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:42:01.647970 kubelet[2213]: E0213 19:42:01.647931 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:01.748923 kubelet[2213]: E0213 19:42:01.748875 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:01.847834 kubelet[2213]: E0213 19:42:01.847777 2213 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="800ms" Feb 13 19:42:01.849845 kubelet[2213]: E0213 19:42:01.849787 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:01.950349 kubelet[2213]: E0213 19:42:01.950263 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:01.964492 kubelet[2213]: E0213 19:42:01.964439 2213 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:42:02.051204 kubelet[2213]: E0213 19:42:02.051013 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.151602 kubelet[2213]: E0213 19:42:02.151531 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.252106 kubelet[2213]: E0213 19:42:02.252056 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.270692 kubelet[2213]: W0213 19:42:02.270668 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:02.270785 kubelet[2213]: E0213 19:42:02.270705 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:02.352385 kubelet[2213]: E0213 19:42:02.352268 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.404278 kubelet[2213]: W0213 19:42:02.404222 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:02.404278 kubelet[2213]: E0213 19:42:02.404282 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:02.452987 kubelet[2213]: E0213 19:42:02.452925 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.553518 kubelet[2213]: E0213 19:42:02.553457 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.645352 kubelet[2213]: W0213 19:42:02.645193 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:02.645352 kubelet[2213]: E0213 19:42:02.645250 2213 reflector.go:166] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:02.648769 kubelet[2213]: E0213 19:42:02.648728 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="1.6s" Feb 13 19:42:02.653867 kubelet[2213]: E0213 19:42:02.653838 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.685405 kubelet[2213]: W0213 19:42:02.685374 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:02.685532 kubelet[2213]: E0213 19:42:02.685418 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:02.754391 kubelet[2213]: E0213 19:42:02.754343 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.765578 kubelet[2213]: E0213 19:42:02.765537 2213 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:42:02.855075 kubelet[2213]: E0213 19:42:02.855023 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:02.955603 kubelet[2213]: E0213 19:42:02.955459 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:03.056077 kubelet[2213]: E0213 19:42:03.056013 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:03.145875 kubelet[2213]: I0213 19:42:03.145809 2213 policy_none.go:49] "None policy: Start" Feb 13 19:42:03.145875 kubelet[2213]: I0213 19:42:03.145856 2213 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:42:03.145875 kubelet[2213]: I0213 19:42:03.145873 2213 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:42:03.156612 kubelet[2213]: E0213 19:42:03.156567 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:03.257260 kubelet[2213]: E0213 19:42:03.257205 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:03.357758 kubelet[2213]: E0213 19:42:03.357677 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:03.364199 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 13 19:42:03.373005 kubelet[2213]: E0213 19:42:03.369650 2213 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:03.377127 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:42:03.379939 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:42:03.395381 kubelet[2213]: I0213 19:42:03.395360 2213 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:42:03.395617 kubelet[2213]: I0213 19:42:03.395604 2213 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:42:03.395673 kubelet[2213]: I0213 19:42:03.395619 2213 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:42:03.396035 kubelet[2213]: I0213 19:42:03.395856 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:42:03.396702 kubelet[2213]: E0213 19:42:03.396666 2213 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:42:03.396839 kubelet[2213]: E0213 19:42:03.396713 2213 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:42:03.497645 kubelet[2213]: I0213 19:42:03.497599 2213 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:42:03.498005 kubelet[2213]: E0213 19:42:03.497955 2213 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Feb 13 19:42:03.699759 kubelet[2213]: I0213 19:42:03.699671 2213 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:42:03.700353 kubelet[2213]: E0213 19:42:03.700126 2213 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Feb 13 19:42:04.101883 kubelet[2213]: I0213 19:42:04.101857 2213 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:42:04.102256 kubelet[2213]: E0213 19:42:04.102146 2213 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Feb 13 19:42:04.249996 kubelet[2213]: E0213 19:42:04.249966 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="3.2s" Feb 13 19:42:04.373654 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
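Every failure above ("dial tcp 10.0.0.105:6443: connect: connection refused") is the same underlying condition: the kubelet is up before the API server it is about to launch as a static pod, so node registration, lease creation, event posting, and all reflector list/watch calls fail at the TCP connect until the kube-apiserver container started later in the log begins accepting connections. A stdlib-only Go probe that reproduces exactly that check (the address is taken from the log; the probe itself is not part of the kubelet):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same step every reflector and registration attempt fails at.
	conn, err := net.DialTimeout("tcp", "10.0.0.105:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}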
Feb 13 19:42:04.386213 kubelet[2213]: E0213 19:42:04.386184 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:04.388015 systemd[1]: Created slice kubepods-burstable-poda172cf5f106beaae2f306e4dcb578ad0.slice - libcontainer container kubepods-burstable-poda172cf5f106beaae2f306e4dcb578ad0.slice. Feb 13 19:42:04.400447 kubelet[2213]: E0213 19:42:04.400411 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:04.402893 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:42:04.404518 kubelet[2213]: E0213 19:42:04.404495 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:04.407908 kubelet[2213]: W0213 19:42:04.407864 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:04.407970 kubelet[2213]: E0213 19:42:04.407918 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:04.464489 kubelet[2213]: I0213 19:42:04.464469 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:04.464573 kubelet[2213]: I0213 19:42:04.464493 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a172cf5f106beaae2f306e4dcb578ad0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a172cf5f106beaae2f306e4dcb578ad0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:04.464573 kubelet[2213]: I0213 19:42:04.464511 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a172cf5f106beaae2f306e4dcb578ad0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a172cf5f106beaae2f306e4dcb578ad0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:04.464573 kubelet[2213]: I0213 19:42:04.464537 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:04.464573 kubelet[2213]: I0213 19:42:04.464551 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:04.464573 kubelet[2213]: I0213 19:42:04.464565 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:04.464729 kubelet[2213]: I0213 19:42:04.464592 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a172cf5f106beaae2f306e4dcb578ad0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a172cf5f106beaae2f306e4dcb578ad0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:04.464729 kubelet[2213]: I0213 19:42:04.464617 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:04.464729 kubelet[2213]: I0213 19:42:04.464636 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:04.494859 kubelet[2213]: W0213 19:42:04.494806 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:04.494859 kubelet[2213]: E0213 19:42:04.494850 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:04.687293 kubelet[2213]: E0213 19:42:04.687182 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:04.687940 containerd[1486]: time="2025-02-13T19:42:04.687898742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:04.701015 kubelet[2213]: E0213 19:42:04.700983 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:04.701272 containerd[1486]: time="2025-02-13T19:42:04.701243440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a172cf5f106beaae2f306e4dcb578ad0,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:04.705478 kubelet[2213]: E0213 
19:42:04.705451 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:04.705688 containerd[1486]: time="2025-02-13T19:42:04.705665328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:04.904337 kubelet[2213]: I0213 19:42:04.904283 2213 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:42:04.904779 kubelet[2213]: E0213 19:42:04.904722 2213 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Feb 13 19:42:05.220994 kubelet[2213]: W0213 19:42:05.220921 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:05.220994 kubelet[2213]: E0213 19:42:05.220985 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:05.388782 kubelet[2213]: W0213 19:42:05.388690 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Feb 13 19:42:05.388782 kubelet[2213]: E0213 19:42:05.388767 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:05.872377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3603563065.mount: Deactivated successfully. 
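The recurring dns.go:153 warnings come from the kubelet capping pod resolv.conf files at three nameservers (the classic libc resolver limit); the host resolv.conf evidently lists more, and the surviving line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A simplified Go sketch of that truncation; the fourth nameserver in the example is invented, since the log does not show which entry was dropped:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // per-pod resolver limit the kubelet enforces

// applyLimit keeps only the first three nameserver entries, as the
// "some nameservers have been omitted" warning describes.
func applyLimit(resolvConf string) []string {
	var ns []string
	for _, line := range strings.Split(resolvConf, "\n") {
		f := strings.Fields(line)
		if len(f) >= 2 && f[0] == "nameserver" {
			ns = append(ns, f[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}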
Feb 13 19:42:06.002652 containerd[1486]: time="2025-02-13T19:42:06.002567695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:06.033053 containerd[1486]: time="2025-02-13T19:42:06.033002264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:42:06.056185 containerd[1486]: time="2025-02-13T19:42:06.056134943Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:06.061591 containerd[1486]: time="2025-02-13T19:42:06.061508697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:06.090130 containerd[1486]: time="2025-02-13T19:42:06.090093808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:42:06.103114 containerd[1486]: time="2025-02-13T19:42:06.103079882Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:06.118317 containerd[1486]: time="2025-02-13T19:42:06.118272936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:06.119167 containerd[1486]: time="2025-02-13T19:42:06.119133383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.431128148s" Feb 13 19:42:06.130830 containerd[1486]: time="2025-02-13T19:42:06.130692874Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:42:06.172793 containerd[1486]: time="2025-02-13T19:42:06.172745837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.471438256s" Feb 13 19:42:06.192354 containerd[1486]: time="2025-02-13T19:42:06.192316720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.486605083s" Feb 13 19:42:06.506814 kubelet[2213]: I0213 19:42:06.506769 2213 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:42:06.507291 kubelet[2213]: E0213 19:42:06.507093 2213 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 
10.0.0.105:6443: connect: connection refused" node="localhost" Feb 13 19:42:06.581914 containerd[1486]: time="2025-02-13T19:42:06.581239859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:06.581914 containerd[1486]: time="2025-02-13T19:42:06.581855110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:06.582101 containerd[1486]: time="2025-02-13T19:42:06.581870048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:06.582101 containerd[1486]: time="2025-02-13T19:42:06.581954659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:06.612579 systemd[1]: Started cri-containerd-fbd8db9b8f347e5b4df3ad7301c8b0ada0ad452bec5c9222053aed349b15c587.scope - libcontainer container fbd8db9b8f347e5b4df3ad7301c8b0ada0ad452bec5c9222053aed349b15c587. Feb 13 19:42:06.646299 containerd[1486]: time="2025-02-13T19:42:06.646252952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbd8db9b8f347e5b4df3ad7301c8b0ada0ad452bec5c9222053aed349b15c587\"" Feb 13 19:42:06.647495 kubelet[2213]: E0213 19:42:06.647470 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:06.649100 containerd[1486]: time="2025-02-13T19:42:06.649068178Z" level=info msg="CreateContainer within sandbox \"fbd8db9b8f347e5b4df3ad7301c8b0ada0ad452bec5c9222053aed349b15c587\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:42:06.695141 containerd[1486]: time="2025-02-13T19:42:06.694959643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:06.695141 containerd[1486]: time="2025-02-13T19:42:06.695022192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:06.695141 containerd[1486]: time="2025-02-13T19:42:06.695041078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:06.695980 containerd[1486]: time="2025-02-13T19:42:06.695900583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:06.713560 systemd[1]: Started cri-containerd-2c173f8d1b173b12be626262b6c1ece930dc55b40d849eac025c482782fa0ac1.scope - libcontainer container 2c173f8d1b173b12be626262b6c1ece930dc55b40d849eac025c482782fa0ac1. Feb 13 19:42:06.726359 containerd[1486]: time="2025-02-13T19:42:06.726175449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:06.726359 containerd[1486]: time="2025-02-13T19:42:06.726223951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:06.726359 containerd[1486]: time="2025-02-13T19:42:06.726253438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:06.727045 containerd[1486]: time="2025-02-13T19:42:06.727006510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:06.746645 systemd[1]: Started cri-containerd-d27ef5bcf1c16f1894584c816f10f526a5dea2040c131807f1398c73df13e7e5.scope - libcontainer container d27ef5bcf1c16f1894584c816f10f526a5dea2040c131807f1398c73df13e7e5. Feb 13 19:42:06.756888 containerd[1486]: time="2025-02-13T19:42:06.756716221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c173f8d1b173b12be626262b6c1ece930dc55b40d849eac025c482782fa0ac1\"" Feb 13 19:42:06.757647 kubelet[2213]: E0213 19:42:06.757608 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:06.759222 containerd[1486]: time="2025-02-13T19:42:06.759200366Z" level=info msg="CreateContainer within sandbox \"2c173f8d1b173b12be626262b6c1ece930dc55b40d849eac025c482782fa0ac1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:42:06.786042 containerd[1486]: time="2025-02-13T19:42:06.785998027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a172cf5f106beaae2f306e4dcb578ad0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d27ef5bcf1c16f1894584c816f10f526a5dea2040c131807f1398c73df13e7e5\"" Feb 13 19:42:06.786688 kubelet[2213]: E0213 19:42:06.786662 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:06.788566 containerd[1486]: time="2025-02-13T19:42:06.788404786Z" level=info msg="CreateContainer within sandbox \"d27ef5bcf1c16f1894584c816f10f526a5dea2040c131807f1398c73df13e7e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:42:07.220863 containerd[1486]: time="2025-02-13T19:42:07.220767434Z" level=info msg="CreateContainer within sandbox \"fbd8db9b8f347e5b4df3ad7301c8b0ada0ad452bec5c9222053aed349b15c587\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d1bd9fac51babb8b98cf79c1d2abe86ee8ac9307a3be134308e3ebc719c40025\"" Feb 13 19:42:07.221433 containerd[1486]: time="2025-02-13T19:42:07.221401770Z" level=info msg="StartContainer for \"d1bd9fac51babb8b98cf79c1d2abe86ee8ac9307a3be134308e3ebc719c40025\"" Feb 13 19:42:07.246572 systemd[1]: Started cri-containerd-d1bd9fac51babb8b98cf79c1d2abe86ee8ac9307a3be134308e3ebc719c40025.scope - libcontainer container d1bd9fac51babb8b98cf79c1d2abe86ee8ac9307a3be134308e3ebc719c40025. 
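The containerd entries above trace the CRI lifecycle for each static pod: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox, and StartContainer (seen below) runs it. The real interface is the CRI v1 gRPC RuntimeService served by containerd; the sketch below models just those three calls with a hypothetical in-memory runtime, so the ordering is faithful to the log but the types are stand-ins:

package main

import "fmt"

// runtimeService is a stand-in for the subset of the CRI RuntimeService
// exercised in the log: sandbox first, then the container inside it.
type runtimeService interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rt runtimeService = &fakeRuntime{}
	sb, _ := rt.RunPodSandbox("kube-scheduler-localhost")
	c, _ := rt.CreateContainer(sb, "kube-scheduler")
	_ = rt.StartContainer(c)
	fmt.Printf("started %s in sandbox %s\n", c, sb)
}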
Feb 13 19:42:07.329012 containerd[1486]: time="2025-02-13T19:42:07.328946023Z" level=info msg="StartContainer for \"d1bd9fac51babb8b98cf79c1d2abe86ee8ac9307a3be134308e3ebc719c40025\" returns successfully" Feb 13 19:42:07.407695 containerd[1486]: time="2025-02-13T19:42:07.407638476Z" level=info msg="CreateContainer within sandbox \"2c173f8d1b173b12be626262b6c1ece930dc55b40d849eac025c482782fa0ac1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0170134f22cde24cb836e7f7ebb0ad6868b92679ff989e7a9380b9f85fa09b4a\"" Feb 13 19:42:07.408262 containerd[1486]: time="2025-02-13T19:42:07.408212277Z" level=info msg="StartContainer for \"0170134f22cde24cb836e7f7ebb0ad6868b92679ff989e7a9380b9f85fa09b4a\"" Feb 13 19:42:07.421579 kubelet[2213]: E0213 19:42:07.421514 2213 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:42:07.434155 systemd[1]: Started cri-containerd-0170134f22cde24cb836e7f7ebb0ad6868b92679ff989e7a9380b9f85fa09b4a.scope - libcontainer container 0170134f22cde24cb836e7f7ebb0ad6868b92679ff989e7a9380b9f85fa09b4a. Feb 13 19:42:07.450801 containerd[1486]: time="2025-02-13T19:42:07.450617398Z" level=info msg="CreateContainer within sandbox \"d27ef5bcf1c16f1894584c816f10f526a5dea2040c131807f1398c73df13e7e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bd9404c536e54f92aa725bbf5d6c1ce81ed84761e8ef40b3d92952f605ce2354\"" Feb 13 19:42:07.450926 kubelet[2213]: E0213 19:42:07.450730 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="6.4s" Feb 13 19:42:07.451268 containerd[1486]: time="2025-02-13T19:42:07.451227899Z" level=info msg="StartContainer for \"bd9404c536e54f92aa725bbf5d6c1ce81ed84761e8ef40b3d92952f605ce2354\"" Feb 13 19:42:07.482564 systemd[1]: Started cri-containerd-bd9404c536e54f92aa725bbf5d6c1ce81ed84761e8ef40b3d92952f605ce2354.scope - libcontainer container bd9404c536e54f92aa725bbf5d6c1ce81ed84761e8ef40b3d92952f605ce2354. 
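Note the lease controller's retry interval across the log: 200ms, 400ms, 800ms, 1.6s, 3.2s, and now 6.4s, i.e. each consecutive failure doubles the wait. A few lines of Go that reproduce the reported sequence (a sketch of the doubling, not the kubelet's retry code):

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed, will retry, interval=%v\n", attempt, interval)
		interval *= 2 // doubles on each consecutive failure
	}
}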
Feb 13 19:42:07.508928 containerd[1486]: time="2025-02-13T19:42:07.508866676Z" level=info msg="StartContainer for \"0170134f22cde24cb836e7f7ebb0ad6868b92679ff989e7a9380b9f85fa09b4a\" returns successfully" Feb 13 19:42:07.580767 containerd[1486]: time="2025-02-13T19:42:07.580707786Z" level=info msg="StartContainer for \"bd9404c536e54f92aa725bbf5d6c1ce81ed84761e8ef40b3d92952f605ce2354\" returns successfully" Feb 13 19:42:08.279367 kubelet[2213]: E0213 19:42:08.279333 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:08.279784 kubelet[2213]: E0213 19:42:08.279481 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:08.281529 kubelet[2213]: E0213 19:42:08.281488 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:08.281699 kubelet[2213]: E0213 19:42:08.281612 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:08.281924 kubelet[2213]: E0213 19:42:08.281902 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:08.282052 kubelet[2213]: E0213 19:42:08.282033 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:08.899800 kubelet[2213]: E0213 19:42:08.899670 2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dbf3cc0adaf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:42:01.241942775 +0000 UTC m=+0.422623715,LastTimestamp:2025-02-13 19:42:01.241942775 +0000 UTC m=+0.422623715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:42:08.917330 kubelet[2213]: E0213 19:42:08.917217 2213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 19:42:08.956667 kubelet[2213]: E0213 19:42:08.956531 2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dbf3cc353ee5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:42:01.244720869 +0000 UTC m=+0.425401809,LastTimestamp:2025-02-13 19:42:01.244720869 +0000 UTC m=+0.425401809,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:42:09.283660 kubelet[2213]: E0213 19:42:09.283632 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:09.284101 kubelet[2213]: E0213 19:42:09.283712 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:09.284101 kubelet[2213]: E0213 19:42:09.283751 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:09.284101 kubelet[2213]: E0213 19:42:09.283813 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:09.284101 kubelet[2213]: E0213 19:42:09.283817 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:09.284101 kubelet[2213]: E0213 19:42:09.283908 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:09.399074 kubelet[2213]: E0213 19:42:09.399016 2213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 19:42:09.709381 kubelet[2213]: I0213 19:42:09.709226 2213 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:42:09.759259 kubelet[2213]: I0213 19:42:09.759217 2213 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:42:09.759259 kubelet[2213]: E0213 19:42:09.759262 2213 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:42:09.810631 kubelet[2213]: E0213 19:42:09.810596 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:09.910915 kubelet[2213]: E0213 19:42:09.910866 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.011219 kubelet[2213]: E0213 19:42:10.011171 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.111821 kubelet[2213]: E0213 19:42:10.111765 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.212480 kubelet[2213]: E0213 19:42:10.212413 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.285158 kubelet[2213]: E0213 19:42:10.285045 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:10.285582 kubelet[2213]: E0213 19:42:10.285171 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:10.285582 kubelet[2213]: E0213 19:42:10.285174 2213 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:10.285582 kubelet[2213]: E0213 19:42:10.285264 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:10.285582 kubelet[2213]: E0213 19:42:10.285274 2213 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:42:10.285582 kubelet[2213]: E0213 19:42:10.285350 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:10.312739 kubelet[2213]: E0213 19:42:10.312682 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.413494 kubelet[2213]: E0213 19:42:10.413416 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.514052 kubelet[2213]: E0213 19:42:10.514011 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.614771 kubelet[2213]: E0213 19:42:10.614629 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.715732 kubelet[2213]: E0213 19:42:10.715678 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.816588 kubelet[2213]: E0213 19:42:10.816537 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.917506 kubelet[2213]: E0213 19:42:10.917361 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.017978 kubelet[2213]: E0213 19:42:11.017931 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.118489 kubelet[2213]: E0213 19:42:11.118455 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.219485 kubelet[2213]: E0213 19:42:11.219380 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.319528 kubelet[2213]: E0213 19:42:11.319487 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.420324 kubelet[2213]: E0213 19:42:11.420287 2213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.546293 kubelet[2213]: I0213 19:42:11.546263 2213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:11.589751 kubelet[2213]: I0213 19:42:11.589711 2213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:11.611160 kubelet[2213]: I0213 19:42:11.611118 2213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:11.865853 update_engine[1471]: I20250213 19:42:11.865670 1471 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:42:11.935455 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2497) Feb 13 19:42:11.963475 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2499) Feb 13 19:42:12.005452 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2499) Feb 13 19:42:12.237824 kubelet[2213]: I0213 19:42:12.237775 2213 apiserver.go:52] "Watching apiserver" Feb 13 19:42:12.243383 kubelet[2213]: E0213 19:42:12.242990 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:12.243383 kubelet[2213]: E0213 19:42:12.243180 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:12.243383 kubelet[2213]: E0213 19:42:12.243388 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:12.246061 kubelet[2213]: I0213 19:42:12.246027 2213 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:42:13.275479 systemd[1]: Reloading requested from client PID 2507 ('systemctl') (unit session-7.scope)... Feb 13 19:42:13.275495 systemd[1]: Reloading... Feb 13 19:42:13.349475 zram_generator::config[2552]: No configuration found. Feb 13 19:42:13.460887 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:42:13.553945 systemd[1]: Reloading finished in 278 ms. Feb 13 19:42:13.597903 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:13.612102 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:42:13.612484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:13.612560 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 127.2M memory peak, 0B memory swap peak. Feb 13 19:42:13.622751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:13.773246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:13.778960 (kubelet)[2594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:42:13.810740 kubelet[2594]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:42:13.810740 kubelet[2594]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:42:13.810740 kubelet[2594]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:42:13.811082 kubelet[2594]: I0213 19:42:13.810721 2594 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:42:13.817853 kubelet[2594]: I0213 19:42:13.817811 2594 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:42:13.817853 kubelet[2594]: I0213 19:42:13.817836 2594 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:42:13.818089 kubelet[2594]: I0213 19:42:13.818075 2594 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:42:13.819188 kubelet[2594]: I0213 19:42:13.819169 2594 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:42:13.821477 kubelet[2594]: I0213 19:42:13.821457 2594 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:42:13.827153 kubelet[2594]: E0213 19:42:13.827103 2594 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:42:13.827153 kubelet[2594]: I0213 19:42:13.827149 2594 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:42:13.832545 kubelet[2594]: I0213 19:42:13.832512 2594 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:42:13.832823 kubelet[2594]: I0213 19:42:13.832767 2594 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:42:13.833000 kubelet[2594]: I0213 19:42:13.832797 2594 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:42:13.833094 kubelet[2594]: I0213 19:42:13.833001 2594 topology_manager.go:138] "Creating 
topology manager with none policy" Feb 13 19:42:13.833094 kubelet[2594]: I0213 19:42:13.833012 2594 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:42:13.833094 kubelet[2594]: I0213 19:42:13.833062 2594 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:13.833278 kubelet[2594]: I0213 19:42:13.833260 2594 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:42:13.833310 kubelet[2594]: I0213 19:42:13.833279 2594 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:42:13.833310 kubelet[2594]: I0213 19:42:13.833300 2594 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:42:13.833654 kubelet[2594]: I0213 19:42:13.833312 2594 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:42:13.834284 kubelet[2594]: I0213 19:42:13.834245 2594 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:42:13.834716 kubelet[2594]: I0213 19:42:13.834685 2594 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:42:13.837648 kubelet[2594]: I0213 19:42:13.835303 2594 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:42:13.837648 kubelet[2594]: I0213 19:42:13.835328 2594 server.go:1287] "Started kubelet" Feb 13 19:42:13.837648 kubelet[2594]: I0213 19:42:13.835475 2594 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:42:13.837648 kubelet[2594]: I0213 19:42:13.835716 2594 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:42:13.837648 kubelet[2594]: I0213 19:42:13.835951 2594 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:42:13.837648 kubelet[2594]: I0213 19:42:13.837452 2594 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:42:13.838261 kubelet[2594]: I0213 19:42:13.838246 2594 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:42:13.844790 kubelet[2594]: I0213 19:42:13.844759 2594 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:42:13.847401 kubelet[2594]: E0213 19:42:13.847371 2594 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:42:13.848287 kubelet[2594]: I0213 19:42:13.848271 2594 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:42:13.848542 kubelet[2594]: I0213 19:42:13.848502 2594 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:42:13.848853 kubelet[2594]: I0213 19:42:13.848838 2594 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:42:13.853111 kubelet[2594]: I0213 19:42:13.853086 2594 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:42:13.853493 kubelet[2594]: I0213 19:42:13.853472 2594 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:42:13.855827 kubelet[2594]: I0213 19:42:13.855803 2594 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:42:13.858845 kubelet[2594]: I0213 19:42:13.858801 2594 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:42:13.860623 kubelet[2594]: I0213 19:42:13.860585 2594 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:42:13.860693 kubelet[2594]: I0213 19:42:13.860631 2594 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:42:13.860693 kubelet[2594]: I0213 19:42:13.860660 2594 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:42:13.860693 kubelet[2594]: I0213 19:42:13.860668 2594 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:42:13.860782 kubelet[2594]: E0213 19:42:13.860719 2594 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:42:13.884928 kubelet[2594]: I0213 19:42:13.884898 2594 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:42:13.884928 kubelet[2594]: I0213 19:42:13.884914 2594 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:42:13.884928 kubelet[2594]: I0213 19:42:13.884932 2594 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:13.885088 kubelet[2594]: I0213 19:42:13.885075 2594 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:42:13.885108 kubelet[2594]: I0213 19:42:13.885084 2594 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:42:13.885108 kubelet[2594]: I0213 19:42:13.885102 2594 policy_none.go:49] "None policy: Start" Feb 13 19:42:13.885145 kubelet[2594]: I0213 19:42:13.885111 2594 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:42:13.885145 kubelet[2594]: I0213 19:42:13.885120 2594 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:42:13.885225 kubelet[2594]: I0213 19:42:13.885208 2594 state_mem.go:75] "Updated machine memory state" Feb 13 19:42:13.889128 kubelet[2594]: I0213 19:42:13.889108 2594 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:42:13.889297 kubelet[2594]: I0213 19:42:13.889264 2594 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:42:13.889297 kubelet[2594]: I0213 19:42:13.889277 2594 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:42:13.889488 
kubelet[2594]: I0213 19:42:13.889469 2594 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:42:13.890700 kubelet[2594]: E0213 19:42:13.890675 2594 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:42:13.961658 kubelet[2594]: I0213 19:42:13.961624 2594 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:13.961798 kubelet[2594]: I0213 19:42:13.961734 2594 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:13.961798 kubelet[2594]: I0213 19:42:13.961629 2594 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:13.967962 kubelet[2594]: E0213 19:42:13.967862 2594 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:13.967962 kubelet[2594]: E0213 19:42:13.967940 2594 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:13.968327 kubelet[2594]: E0213 19:42:13.968297 2594 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:13.995352 kubelet[2594]: I0213 19:42:13.995329 2594 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:42:14.003030 kubelet[2594]: I0213 19:42:14.003002 2594 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 19:42:14.003118 kubelet[2594]: I0213 19:42:14.003089 2594 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:42:14.050618 kubelet[2594]: I0213 19:42:14.050560 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:14.050618 kubelet[2594]: I0213 19:42:14.050604 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:14.050618 kubelet[2594]: I0213 19:42:14.050629 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a172cf5f106beaae2f306e4dcb578ad0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a172cf5f106beaae2f306e4dcb578ad0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:14.050825 kubelet[2594]: I0213 19:42:14.050649 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:14.050825 kubelet[2594]: I0213 
19:42:14.050666 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:14.050825 kubelet[2594]: I0213 19:42:14.050681 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:14.050825 kubelet[2594]: I0213 19:42:14.050696 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:14.050825 kubelet[2594]: I0213 19:42:14.050713 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a172cf5f106beaae2f306e4dcb578ad0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a172cf5f106beaae2f306e4dcb578ad0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:14.050938 kubelet[2594]: I0213 19:42:14.050731 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a172cf5f106beaae2f306e4dcb578ad0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a172cf5f106beaae2f306e4dcb578ad0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:14.258179 sudo[2629]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:42:14.258628 sudo[2629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:42:14.268617 kubelet[2594]: E0213 19:42:14.268518 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.268617 kubelet[2594]: E0213 19:42:14.268561 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.268617 kubelet[2594]: E0213 19:42:14.268519 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.719138 sudo[2629]: pam_unix(sudo:session): session closed for user root Feb 13 19:42:14.833693 kubelet[2594]: I0213 19:42:14.833648 2594 apiserver.go:52] "Watching apiserver" Feb 13 19:42:14.849207 kubelet[2594]: I0213 19:42:14.849167 2594 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:42:14.870665 kubelet[2594]: I0213 19:42:14.869624 2594 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:14.870665 kubelet[2594]: E0213 19:42:14.869671 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.870665 kubelet[2594]: I0213 19:42:14.869837 2594 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:14.876948 kubelet[2594]: E0213 19:42:14.876921 2594 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:14.877057 kubelet[2594]: E0213 19:42:14.877042 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.877745 kubelet[2594]: E0213 19:42:14.877244 2594 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:14.877745 kubelet[2594]: E0213 19:42:14.877324 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.887074 kubelet[2594]: I0213 19:42:14.886993 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.886967631 podStartE2EDuration="3.886967631s" podCreationTimestamp="2025-02-13 19:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:14.886707189 +0000 UTC m=+1.103991800" watchObservedRunningTime="2025-02-13 19:42:14.886967631 +0000 UTC m=+1.104252242" Feb 13 19:42:14.901691 kubelet[2594]: I0213 19:42:14.901516 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.901497285 podStartE2EDuration="3.901497285s" podCreationTimestamp="2025-02-13 19:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:14.895107388 +0000 UTC m=+1.112391999" watchObservedRunningTime="2025-02-13 19:42:14.901497285 +0000 UTC m=+1.118781896" Feb 13 19:42:14.907753 kubelet[2594]: I0213 19:42:14.907693 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.907672946 podStartE2EDuration="3.907672946s" podCreationTimestamp="2025-02-13 19:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:14.90146811 +0000 UTC m=+1.118752721" watchObservedRunningTime="2025-02-13 19:42:14.907672946 +0000 UTC m=+1.124957557" Feb 13 19:42:15.870630 kubelet[2594]: E0213 19:42:15.870593 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:15.871013 kubelet[2594]: E0213 19:42:15.870806 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:15.877073 sudo[1662]: pam_unix(sudo:session): session closed for user root Feb 13 19:42:15.878392 sshd[1661]: Connection closed by 10.0.0.1 port 33310 Feb 13 19:42:15.878849 sshd-session[1659]: pam_unix(sshd:session): session closed 
for user core Feb 13 19:42:15.882906 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:33310.service: Deactivated successfully. Feb 13 19:42:15.884787 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:42:15.884968 systemd[1]: session-7.scope: Consumed 5.741s CPU time, 155.5M memory peak, 0B memory swap peak. Feb 13 19:42:15.885474 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:42:15.886250 systemd-logind[1469]: Removed session 7. Feb 13 19:42:16.871799 kubelet[2594]: E0213 19:42:16.871756 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:17.792866 kubelet[2594]: E0213 19:42:17.792830 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:17.873007 kubelet[2594]: E0213 19:42:17.872967 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:19.126576 kubelet[2594]: I0213 19:42:19.126417 2594 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:42:19.127073 containerd[1486]: time="2025-02-13T19:42:19.126948715Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:42:19.127360 kubelet[2594]: I0213 19:42:19.127128 2594 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:42:19.807849 systemd[1]: Created slice kubepods-besteffort-pod9260147e_f0cb_4c3a_aaa3_c54af38d6ce1.slice - libcontainer container kubepods-besteffort-pod9260147e_f0cb_4c3a_aaa3_c54af38d6ce1.slice. Feb 13 19:42:19.825692 systemd[1]: Created slice kubepods-burstable-pode9a5ac7b_add9_4b57_a754_d102b5796ea9.slice - libcontainer container kubepods-burstable-pode9a5ac7b_add9_4b57_a754_d102b5796ea9.slice. 
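[Editor's note] The recurring "Nameserver limits exceeded" errors from dns.go:153 are the kubelet coping with a resolv.conf that lists more than three nameservers: the glibc resolver honors at most three, so the kubelet keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and logs the rest as omitted. A minimal sketch of that trimming in Go, with a hypothetical helper name rather than the kubelet's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the glibc/kubelet limit of three resolvers.
const maxNameservers = 3

// applyNameserverLimit keeps the first three nameserver entries from a
// resolv.conf and reports what was dropped. Hypothetical helper that
// illustrates the behavior behind the "Nameserver limits exceeded" log.
func applyNameserverLimit(path string) (kept, dropped []string, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped, sc.Err()
}

func main() {
	kept, dropped, err := applyNameserverLimit("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(dropped) > 0 {
		fmt.Printf("nameserver limits exceeded, applied line: %s (omitted: %s)\n",
			strings.Join(kept, " "), strings.Join(dropped, " "))
	}
}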
Feb 13 19:42:19.884208 kubelet[2594]: I0213 19:42:19.884170 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-host-proc-sys-net\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884208 kubelet[2594]: I0213 19:42:19.884201 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-config-path\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884493 kubelet[2594]: I0213 19:42:19.884302 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-host-proc-sys-kernel\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884493 kubelet[2594]: I0213 19:42:19.884334 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9260147e-f0cb-4c3a-aaa3-c54af38d6ce1-xtables-lock\") pod \"kube-proxy-f6l5p\" (UID: \"9260147e-f0cb-4c3a-aaa3-c54af38d6ce1\") " pod="kube-system/kube-proxy-f6l5p" Feb 13 19:42:19.884493 kubelet[2594]: I0213 19:42:19.884354 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-hostproc\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884493 kubelet[2594]: I0213 19:42:19.884370 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-lib-modules\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884493 kubelet[2594]: I0213 19:42:19.884385 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-xtables-lock\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884493 kubelet[2594]: I0213 19:42:19.884415 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-etc-cni-netd\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884668 kubelet[2594]: I0213 19:42:19.884464 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9a5ac7b-add9-4b57-a754-d102b5796ea9-hubble-tls\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884668 kubelet[2594]: I0213 19:42:19.884534 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/9260147e-f0cb-4c3a-aaa3-c54af38d6ce1-lib-modules\") pod \"kube-proxy-f6l5p\" (UID: \"9260147e-f0cb-4c3a-aaa3-c54af38d6ce1\") " pod="kube-system/kube-proxy-f6l5p" Feb 13 19:42:19.884668 kubelet[2594]: I0213 19:42:19.884570 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-run\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884668 kubelet[2594]: I0213 19:42:19.884586 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-bpf-maps\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884668 kubelet[2594]: I0213 19:42:19.884632 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glpfq\" (UniqueName: \"kubernetes.io/projected/e9a5ac7b-add9-4b57-a754-d102b5796ea9-kube-api-access-glpfq\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884668 kubelet[2594]: I0213 19:42:19.884648 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9260147e-f0cb-4c3a-aaa3-c54af38d6ce1-kube-proxy\") pod \"kube-proxy-f6l5p\" (UID: \"9260147e-f0cb-4c3a-aaa3-c54af38d6ce1\") " pod="kube-system/kube-proxy-f6l5p" Feb 13 19:42:19.884850 kubelet[2594]: I0213 19:42:19.884661 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgd49\" (UniqueName: \"kubernetes.io/projected/9260147e-f0cb-4c3a-aaa3-c54af38d6ce1-kube-api-access-lgd49\") pod \"kube-proxy-f6l5p\" (UID: \"9260147e-f0cb-4c3a-aaa3-c54af38d6ce1\") " pod="kube-system/kube-proxy-f6l5p" Feb 13 19:42:19.884850 kubelet[2594]: I0213 19:42:19.884689 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-cgroup\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884850 kubelet[2594]: I0213 19:42:19.884703 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cni-path\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:19.884850 kubelet[2594]: I0213 19:42:19.884717 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9a5ac7b-add9-4b57-a754-d102b5796ea9-clustermesh-secrets\") pod \"cilium-9mjvr\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " pod="kube-system/cilium-9mjvr" Feb 13 19:42:20.114181 systemd[1]: Created slice kubepods-besteffort-podd23d8d0a_508e_4f4d_aaf8_6612560985c2.slice - libcontainer container kubepods-besteffort-podd23d8d0a_508e_4f4d_aaf8_6612560985c2.slice. 
Feb 13 19:42:20.122567 kubelet[2594]: E0213 19:42:20.122516 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:20.123148 containerd[1486]: time="2025-02-13T19:42:20.123113635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6l5p,Uid:9260147e-f0cb-4c3a-aaa3-c54af38d6ce1,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:20.131148 kubelet[2594]: E0213 19:42:20.131094 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:20.131652 containerd[1486]: time="2025-02-13T19:42:20.131619159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9mjvr,Uid:e9a5ac7b-add9-4b57-a754-d102b5796ea9,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:20.186861 kubelet[2594]: I0213 19:42:20.186774 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5pqp\" (UniqueName: \"kubernetes.io/projected/d23d8d0a-508e-4f4d-aaf8-6612560985c2-kube-api-access-w5pqp\") pod \"cilium-operator-6c4d7847fc-pt7vp\" (UID: \"d23d8d0a-508e-4f4d-aaf8-6612560985c2\") " pod="kube-system/cilium-operator-6c4d7847fc-pt7vp" Feb 13 19:42:20.186861 kubelet[2594]: I0213 19:42:20.186825 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d23d8d0a-508e-4f4d-aaf8-6612560985c2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pt7vp\" (UID: \"d23d8d0a-508e-4f4d-aaf8-6612560985c2\") " pod="kube-system/cilium-operator-6c4d7847fc-pt7vp" Feb 13 19:42:20.328504 containerd[1486]: time="2025-02-13T19:42:20.328381948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:20.328504 containerd[1486]: time="2025-02-13T19:42:20.328466608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:20.328504 containerd[1486]: time="2025-02-13T19:42:20.328488348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:20.328793 containerd[1486]: time="2025-02-13T19:42:20.328590932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:20.331800 containerd[1486]: time="2025-02-13T19:42:20.331687833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:20.331800 containerd[1486]: time="2025-02-13T19:42:20.331735874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:20.331800 containerd[1486]: time="2025-02-13T19:42:20.331750532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:20.331993 containerd[1486]: time="2025-02-13T19:42:20.331818480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:20.349567 systemd[1]: Started cri-containerd-0094525670d40d188882e67d46077ee3c8aa5ab093019725d656f09ed914b022.scope - libcontainer container 0094525670d40d188882e67d46077ee3c8aa5ab093019725d656f09ed914b022. Feb 13 19:42:20.352954 systemd[1]: Started cri-containerd-6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab.scope - libcontainer container 6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab. Feb 13 19:42:20.377308 containerd[1486]: time="2025-02-13T19:42:20.376971443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6l5p,Uid:9260147e-f0cb-4c3a-aaa3-c54af38d6ce1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0094525670d40d188882e67d46077ee3c8aa5ab093019725d656f09ed914b022\"" Feb 13 19:42:20.378569 kubelet[2594]: E0213 19:42:20.378546 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:20.380569 containerd[1486]: time="2025-02-13T19:42:20.380492856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9mjvr,Uid:e9a5ac7b-add9-4b57-a754-d102b5796ea9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\"" Feb 13 19:42:20.380719 containerd[1486]: time="2025-02-13T19:42:20.380690198Z" level=info msg="CreateContainer within sandbox \"0094525670d40d188882e67d46077ee3c8aa5ab093019725d656f09ed914b022\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:42:20.382014 kubelet[2594]: E0213 19:42:20.381907 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:20.383130 containerd[1486]: time="2025-02-13T19:42:20.383106275Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:42:20.398582 containerd[1486]: time="2025-02-13T19:42:20.398540500Z" level=info msg="CreateContainer within sandbox \"0094525670d40d188882e67d46077ee3c8aa5ab093019725d656f09ed914b022\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"663845c3f04044e6fd9a79671cf99d277ed0455b3a3c120c9ed2c5bcf8f42ae2\"" Feb 13 19:42:20.399110 containerd[1486]: time="2025-02-13T19:42:20.399086940Z" level=info msg="StartContainer for \"663845c3f04044e6fd9a79671cf99d277ed0455b3a3c120c9ed2c5bcf8f42ae2\"" Feb 13 19:42:20.418675 kubelet[2594]: E0213 19:42:20.418636 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:20.419158 containerd[1486]: time="2025-02-13T19:42:20.419120430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pt7vp,Uid:d23d8d0a-508e-4f4d-aaf8-6612560985c2,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:20.424587 systemd[1]: Started cri-containerd-663845c3f04044e6fd9a79671cf99d277ed0455b3a3c120c9ed2c5bcf8f42ae2.scope - libcontainer container 663845c3f04044e6fd9a79671cf99d277ed0455b3a3c120c9ed2c5bcf8f42ae2. Feb 13 19:42:20.444445 containerd[1486]: time="2025-02-13T19:42:20.444301941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:20.444555 containerd[1486]: time="2025-02-13T19:42:20.444517377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:20.444809 containerd[1486]: time="2025-02-13T19:42:20.444555549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:20.444809 containerd[1486]: time="2025-02-13T19:42:20.444746229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:20.459317 containerd[1486]: time="2025-02-13T19:42:20.459269374Z" level=info msg="StartContainer for \"663845c3f04044e6fd9a79671cf99d277ed0455b3a3c120c9ed2c5bcf8f42ae2\" returns successfully" Feb 13 19:42:20.461586 systemd[1]: Started cri-containerd-5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1.scope - libcontainer container 5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1. Feb 13 19:42:20.500096 containerd[1486]: time="2025-02-13T19:42:20.500045642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pt7vp,Uid:d23d8d0a-508e-4f4d-aaf8-6612560985c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1\"" Feb 13 19:42:20.500843 kubelet[2594]: E0213 19:42:20.500813 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:20.879768 kubelet[2594]: E0213 19:42:20.879731 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:20.887802 kubelet[2594]: I0213 19:42:20.887722 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f6l5p" podStartSLOduration=1.8877039020000002 podStartE2EDuration="1.887703902s" podCreationTimestamp="2025-02-13 19:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:20.887687911 +0000 UTC m=+7.104972522" watchObservedRunningTime="2025-02-13 19:42:20.887703902 +0000 UTC m=+7.104988513" Feb 13 19:42:22.591045 kubelet[2594]: E0213 19:42:22.590795 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:22.882466 kubelet[2594]: E0213 19:42:22.881969 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:23.883547 kubelet[2594]: E0213 19:42:23.883499 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:24.060109 kubelet[2594]: E0213 19:42:24.060065 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:24.889553 kubelet[2594]: E0213 19:42:24.889504 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:27.931013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669185112.mount: Deactivated successfully. Feb 13 19:42:32.394398 containerd[1486]: time="2025-02-13T19:42:32.394344943Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:32.395210 containerd[1486]: time="2025-02-13T19:42:32.395147121Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:42:32.396335 containerd[1486]: time="2025-02-13T19:42:32.396283698Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:32.397886 containerd[1486]: time="2025-02-13T19:42:32.397849252Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.0147127s" Feb 13 19:42:32.397936 containerd[1486]: time="2025-02-13T19:42:32.397887043Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:42:32.406033 containerd[1486]: time="2025-02-13T19:42:32.405997154Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:42:32.413183 containerd[1486]: time="2025-02-13T19:42:32.413146418Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:42:32.425452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777179739.mount: Deactivated successfully. Feb 13 19:42:32.426682 containerd[1486]: time="2025-02-13T19:42:32.426647195Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\"" Feb 13 19:42:32.429238 containerd[1486]: time="2025-02-13T19:42:32.429211677Z" level=info msg="StartContainer for \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\"" Feb 13 19:42:32.458560 systemd[1]: Started cri-containerd-0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20.scope - libcontainer container 0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20. Feb 13 19:42:32.487915 containerd[1486]: time="2025-02-13T19:42:32.487840593Z" level=info msg="StartContainer for \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\" returns successfully" Feb 13 19:42:32.495487 systemd[1]: cri-containerd-0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20.scope: Deactivated successfully. 
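[Editor's note] The cilium image is referenced by tag plus digest (v1.12.5@sha256:06ce...), which is why the completed pull above reports an empty repo tag and only a repo digest. A hedged sketch of performing the same digest-pinned pull through the containerd Go client, assuming the default socket path and the "k8s.io" namespace the CRI plugin uses:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same digest-pinned reference the kubelet logged for the cilium pull.
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet's CRI keeps its images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}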
Feb 13 19:42:32.883211 containerd[1486]: time="2025-02-13T19:42:32.883125512Z" level=info msg="shim disconnected" id=0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20 namespace=k8s.io Feb 13 19:42:32.883211 containerd[1486]: time="2025-02-13T19:42:32.883184193Z" level=warning msg="cleaning up after shim disconnected" id=0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20 namespace=k8s.io Feb 13 19:42:32.883211 containerd[1486]: time="2025-02-13T19:42:32.883195224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:42:32.913019 kubelet[2594]: E0213 19:42:32.912986 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:32.914885 containerd[1486]: time="2025-02-13T19:42:32.914776860Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:42:32.931603 containerd[1486]: time="2025-02-13T19:42:32.931562504Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\"" Feb 13 19:42:32.932191 containerd[1486]: time="2025-02-13T19:42:32.932142675Z" level=info msg="StartContainer for \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\"" Feb 13 19:42:32.962567 systemd[1]: Started cri-containerd-98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14.scope - libcontainer container 98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14. Feb 13 19:42:32.987638 containerd[1486]: time="2025-02-13T19:42:32.987593586Z" level=info msg="StartContainer for \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\" returns successfully" Feb 13 19:42:32.999986 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:42:33.000331 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:42:33.000457 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:42:33.007855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:42:33.008125 systemd[1]: cri-containerd-98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14.scope: Deactivated successfully. Feb 13 19:42:33.038991 containerd[1486]: time="2025-02-13T19:42:33.038905881Z" level=info msg="shim disconnected" id=98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14 namespace=k8s.io Feb 13 19:42:33.038991 containerd[1486]: time="2025-02-13T19:42:33.038956367Z" level=warning msg="cleaning up after shim disconnected" id=98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14 namespace=k8s.io Feb 13 19:42:33.038991 containerd[1486]: time="2025-02-13T19:42:33.038964542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:42:33.040354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:42:33.423475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20-rootfs.mount: Deactivated successfully. 
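[Editor's note] mount-cgroup and apply-sysctl-overwrites are run-to-completion init containers, so each StartContainer is quickly followed by its cri-containerd-*.scope deactivating and containerd cleaning up the exited shim, which is what the "shim disconnected" lines record. A sketch of observing such an exit through the containerd client (illustrative only; the container ID is copied from the log above):

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// ID of the apply-sysctl-overwrites container from the log above.
	c, err := client.LoadContainer(ctx, "98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	status := <-exitCh
	code, exitedAt, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	// This is the moment systemd logs "cri-containerd-<id>.scope: Deactivated successfully."
	fmt.Printf("exit code=%d at=%s\n", code, exitedAt)
}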
Feb 13 19:42:33.918143 kubelet[2594]: E0213 19:42:33.918098 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:33.920770 containerd[1486]: time="2025-02-13T19:42:33.920363739Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:42:33.940153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3377382468.mount: Deactivated successfully. Feb 13 19:42:34.461307 containerd[1486]: time="2025-02-13T19:42:34.461240413Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\"" Feb 13 19:42:34.461927 containerd[1486]: time="2025-02-13T19:42:34.461895254Z" level=info msg="StartContainer for \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\"" Feb 13 19:42:34.493679 systemd[1]: Started cri-containerd-11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891.scope - libcontainer container 11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891. Feb 13 19:42:34.528179 systemd[1]: cri-containerd-11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891.scope: Deactivated successfully. Feb 13 19:42:34.597911 containerd[1486]: time="2025-02-13T19:42:34.597866082Z" level=info msg="StartContainer for \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\" returns successfully" Feb 13 19:42:34.616119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891-rootfs.mount: Deactivated successfully. 
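[Editor's note] The mount-bpf-fs step ensures the BPF filesystem is mounted at /sys/fs/bpf so cilium's pinned BPF maps survive agent restarts. In Go, the equivalent of `mount -t bpf bpffs /sys/fs/bpf` is a single syscall; this is a sketch of the effect of that init container, not its actual source:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of `mount bpffs /sys/fs/bpf -t bpf`. Needs CAP_SYS_ADMIN;
	// EBUSY typically just means something already mounted it, which is fine.
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if err != nil && err != unix.EBUSY {
		log.Fatalf("mounting bpffs: %v", err)
	}
}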
Feb 13 19:42:34.788842 containerd[1486]: time="2025-02-13T19:42:34.788775724Z" level=info msg="shim disconnected" id=11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891 namespace=k8s.io Feb 13 19:42:34.788842 containerd[1486]: time="2025-02-13T19:42:34.788824125Z" level=warning msg="cleaning up after shim disconnected" id=11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891 namespace=k8s.io Feb 13 19:42:34.788842 containerd[1486]: time="2025-02-13T19:42:34.788835857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:42:34.925442 kubelet[2594]: E0213 19:42:34.925396 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:34.930258 containerd[1486]: time="2025-02-13T19:42:34.929836760Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:42:35.675996 containerd[1486]: time="2025-02-13T19:42:35.675936533Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\"" Feb 13 19:42:35.676592 containerd[1486]: time="2025-02-13T19:42:35.676523696Z" level=info msg="StartContainer for \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\"" Feb 13 19:42:35.713634 systemd[1]: Started cri-containerd-0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56.scope - libcontainer container 0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56. Feb 13 19:42:35.755540 systemd[1]: cri-containerd-0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56.scope: Deactivated successfully. Feb 13 19:42:35.758718 containerd[1486]: time="2025-02-13T19:42:35.758672497Z" level=info msg="StartContainer for \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\" returns successfully" Feb 13 19:42:35.779736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56-rootfs.mount: Deactivated successfully. 
Feb 13 19:42:35.866097 containerd[1486]: time="2025-02-13T19:42:35.866037411Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:35.867067 containerd[1486]: time="2025-02-13T19:42:35.867005210Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:42:35.867942 containerd[1486]: time="2025-02-13T19:42:35.867886537Z" level=info msg="shim disconnected" id=0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56 namespace=k8s.io Feb 13 19:42:35.867942 containerd[1486]: time="2025-02-13T19:42:35.867933696Z" level=warning msg="cleaning up after shim disconnected" id=0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56 namespace=k8s.io Feb 13 19:42:35.868015 containerd[1486]: time="2025-02-13T19:42:35.867947201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:42:35.868559 containerd[1486]: time="2025-02-13T19:42:35.868513385Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:35.870159 containerd[1486]: time="2025-02-13T19:42:35.870111138Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.46407451s" Feb 13 19:42:35.870159 containerd[1486]: time="2025-02-13T19:42:35.870146735Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:42:35.872091 containerd[1486]: time="2025-02-13T19:42:35.872055993Z" level=info msg="CreateContainer within sandbox \"5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:42:35.888058 containerd[1486]: time="2025-02-13T19:42:35.888010320Z" level=info msg="CreateContainer within sandbox \"5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\"" Feb 13 19:42:35.888534 containerd[1486]: time="2025-02-13T19:42:35.888497927Z" level=info msg="StartContainer for \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\"" Feb 13 19:42:35.916595 systemd[1]: Started cri-containerd-fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76.scope - libcontainer container fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76. 
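[Editor's note] The pull entries carry enough data to derive effective throughput: "bytes read" from the stop-pulling line divided by the wall time in the Pulled-image line. For the two pulls above that works out to roughly 13.2 MiB/s for the 166.7 MB cilium image and about 5.2 MiB/s for the operator image:

package main

import (
	"fmt"
	"time"
)

// throughput converts a byte count and duration into MiB/s.
func throughput(bytes int64, d time.Duration) float64 {
	return float64(bytes) / d.Seconds() / (1 << 20)
}

func main() {
	// Figures copied from the two pulls logged above.
	ciliumDur, _ := time.ParseDuration("12.0147127s")
	operatorDur, _ := time.ParseDuration("3.46407451s")
	fmt.Printf("cilium:   %.1f MiB/s\n", throughput(166730503, ciliumDur))  // ~13.2
	fmt.Printf("operator: %.1f MiB/s\n", throughput(18904197, operatorDur)) // ~5.2
}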
Feb 13 19:42:35.930651 kubelet[2594]: E0213 19:42:35.930526 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:35.934563 containerd[1486]: time="2025-02-13T19:42:35.934409778Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:42:35.953398 containerd[1486]: time="2025-02-13T19:42:35.953345319Z" level=info msg="StartContainer for \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\" returns successfully" Feb 13 19:42:35.956986 containerd[1486]: time="2025-02-13T19:42:35.956938172Z" level=info msg="CreateContainer within sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\"" Feb 13 19:42:35.958005 containerd[1486]: time="2025-02-13T19:42:35.957971634Z" level=info msg="StartContainer for \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\"" Feb 13 19:42:35.986667 systemd[1]: Started cri-containerd-cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132.scope - libcontainer container cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132. Feb 13 19:42:36.026474 containerd[1486]: time="2025-02-13T19:42:36.026408426Z" level=info msg="StartContainer for \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\" returns successfully" Feb 13 19:42:36.205005 kubelet[2594]: I0213 19:42:36.204685 2594 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:42:36.232358 systemd[1]: Created slice kubepods-burstable-pod106c17dc_ed2c_4e78_904a_09e85cc5d345.slice - libcontainer container kubepods-burstable-pod106c17dc_ed2c_4e78_904a_09e85cc5d345.slice. Feb 13 19:42:36.243098 systemd[1]: Created slice kubepods-burstable-pode0777e6a_5fa1_4369_b957_b36c92fecc09.slice - libcontainer container kubepods-burstable-pode0777e6a_5fa1_4369_b957_b36c92fecc09.slice. 
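[Editor's note] "Fast updating node status as it just became ready" marks the kubelet flipping the node's Ready condition now that cilium provides a working CNI, after which the two pending coredns pods can be set up. A hedged client-go sketch for reading that condition; the admin.conf path is an assumption based on the kubeadm-style /etc/kubernetes layout seen elsewhere in this log:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for a kubeadm-provisioned control plane.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s since=%s\n", cond.Status, cond.Reason, cond.LastTransitionTime)
		}
	}
}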
Feb 13 19:42:36.366711 kubelet[2594]: I0213 19:42:36.366649 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drl9p\" (UniqueName: \"kubernetes.io/projected/e0777e6a-5fa1-4369-b957-b36c92fecc09-kube-api-access-drl9p\") pod \"coredns-668d6bf9bc-cdnz7\" (UID: \"e0777e6a-5fa1-4369-b957-b36c92fecc09\") " pod="kube-system/coredns-668d6bf9bc-cdnz7" Feb 13 19:42:36.366711 kubelet[2594]: I0213 19:42:36.366695 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvr5\" (UniqueName: \"kubernetes.io/projected/106c17dc-ed2c-4e78-904a-09e85cc5d345-kube-api-access-chvr5\") pod \"coredns-668d6bf9bc-ns6dp\" (UID: \"106c17dc-ed2c-4e78-904a-09e85cc5d345\") " pod="kube-system/coredns-668d6bf9bc-ns6dp" Feb 13 19:42:36.366711 kubelet[2594]: I0213 19:42:36.366714 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/106c17dc-ed2c-4e78-904a-09e85cc5d345-config-volume\") pod \"coredns-668d6bf9bc-ns6dp\" (UID: \"106c17dc-ed2c-4e78-904a-09e85cc5d345\") " pod="kube-system/coredns-668d6bf9bc-ns6dp" Feb 13 19:42:36.366923 kubelet[2594]: I0213 19:42:36.366730 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0777e6a-5fa1-4369-b957-b36c92fecc09-config-volume\") pod \"coredns-668d6bf9bc-cdnz7\" (UID: \"e0777e6a-5fa1-4369-b957-b36c92fecc09\") " pod="kube-system/coredns-668d6bf9bc-cdnz7" Feb 13 19:42:36.537321 kubelet[2594]: E0213 19:42:36.537275 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:36.539222 containerd[1486]: time="2025-02-13T19:42:36.538717218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ns6dp,Uid:106c17dc-ed2c-4e78-904a-09e85cc5d345,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:36.546634 kubelet[2594]: E0213 19:42:36.546587 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:36.547309 containerd[1486]: time="2025-02-13T19:42:36.547273007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cdnz7,Uid:e0777e6a-5fa1-4369-b957-b36c92fecc09,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:36.933875 kubelet[2594]: E0213 19:42:36.932982 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:36.935994 kubelet[2594]: E0213 19:42:36.935968 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:36.958909 kubelet[2594]: I0213 19:42:36.958838 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9mjvr" podStartSLOduration=5.935616392 podStartE2EDuration="17.958809973s" podCreationTimestamp="2025-02-13 19:42:19 +0000 UTC" firstStartedPulling="2025-02-13 19:42:20.382657509 +0000 UTC m=+6.599942120" lastFinishedPulling="2025-02-13 19:42:32.40585109 +0000 UTC m=+18.623135701" observedRunningTime="2025-02-13 19:42:36.958159922 +0000 UTC 
m=+23.175444533" watchObservedRunningTime="2025-02-13 19:42:36.958809973 +0000 UTC m=+23.176094584" Feb 13 19:42:36.959084 kubelet[2594]: I0213 19:42:36.959047 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pt7vp" podStartSLOduration=1.589502218 podStartE2EDuration="16.959041288s" podCreationTimestamp="2025-02-13 19:42:20 +0000 UTC" firstStartedPulling="2025-02-13 19:42:20.501210709 +0000 UTC m=+6.718495320" lastFinishedPulling="2025-02-13 19:42:35.870749779 +0000 UTC m=+22.088034390" observedRunningTime="2025-02-13 19:42:36.943619236 +0000 UTC m=+23.160903857" watchObservedRunningTime="2025-02-13 19:42:36.959041288 +0000 UTC m=+23.176325899" Feb 13 19:42:37.937895 kubelet[2594]: E0213 19:42:37.937854 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:37.937895 kubelet[2594]: E0213 19:42:37.937902 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:38.939736 kubelet[2594]: E0213 19:42:38.939692 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:40.362103 systemd-networkd[1379]: cilium_host: Link UP Feb 13 19:42:40.362277 systemd-networkd[1379]: cilium_net: Link UP Feb 13 19:42:40.362872 systemd-networkd[1379]: cilium_net: Gained carrier Feb 13 19:42:40.363068 systemd-networkd[1379]: cilium_host: Gained carrier Feb 13 19:42:40.464053 systemd-networkd[1379]: cilium_vxlan: Link UP Feb 13 19:42:40.464063 systemd-networkd[1379]: cilium_vxlan: Gained carrier Feb 13 19:42:40.670560 systemd-networkd[1379]: cilium_net: Gained IPv6LL Feb 13 19:42:40.675468 kernel: NET: Registered PF_ALG protocol family Feb 13 19:42:41.310654 systemd-networkd[1379]: cilium_host: Gained IPv6LL Feb 13 19:42:41.348797 systemd-networkd[1379]: lxc_health: Link UP Feb 13 19:42:41.357588 systemd-networkd[1379]: lxc_health: Gained carrier Feb 13 19:42:41.470585 systemd[1]: Started sshd@7-10.0.0.105:22-10.0.0.1:59786.service - OpenSSH per-connection server daemon (10.0.0.1:59786). Feb 13 19:42:41.515891 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 59786 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:41.517409 sshd-session[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:41.522477 systemd-logind[1469]: New session 8 of user core. Feb 13 19:42:41.530604 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:42:41.643457 systemd-networkd[1379]: lxc02ed68b11a99: Link UP Feb 13 19:42:41.658039 systemd-networkd[1379]: lxc92457460a571: Link UP Feb 13 19:42:41.663473 kernel: eth0: renamed from tmp39f38 Feb 13 19:42:41.669462 kernel: eth0: renamed from tmp52a75 Feb 13 19:42:41.677596 systemd-networkd[1379]: lxc02ed68b11a99: Gained carrier Feb 13 19:42:41.677811 systemd-networkd[1379]: lxc92457460a571: Gained carrier Feb 13 19:42:41.756404 sshd[3782]: Connection closed by 10.0.0.1 port 59786 Feb 13 19:42:41.757630 sshd-session[3780]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:41.761143 systemd[1]: sshd@7-10.0.0.105:22-10.0.0.1:59786.service: Deactivated successfully. 
Feb 13 19:42:41.763235 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:42:41.766319 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:42:41.767880 systemd-logind[1469]: Removed session 8. Feb 13 19:42:41.886642 systemd-networkd[1379]: cilium_vxlan: Gained IPv6LL Feb 13 19:42:42.133597 kubelet[2594]: E0213 19:42:42.132801 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:42.946927 kubelet[2594]: E0213 19:42:42.946879 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:43.102629 systemd-networkd[1379]: lxc92457460a571: Gained IPv6LL Feb 13 19:42:43.230880 systemd-networkd[1379]: lxc_health: Gained IPv6LL Feb 13 19:42:43.294877 systemd-networkd[1379]: lxc02ed68b11a99: Gained IPv6LL Feb 13 19:42:43.949413 kubelet[2594]: E0213 19:42:43.949321 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:45.148569 containerd[1486]: time="2025-02-13T19:42:45.148115156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:45.148569 containerd[1486]: time="2025-02-13T19:42:45.148174818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:45.148569 containerd[1486]: time="2025-02-13T19:42:45.148185969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:45.148569 containerd[1486]: time="2025-02-13T19:42:45.148260560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:45.149321 containerd[1486]: time="2025-02-13T19:42:45.148019376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:45.149321 containerd[1486]: time="2025-02-13T19:42:45.148180599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:45.149321 containerd[1486]: time="2025-02-13T19:42:45.148204484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:45.149321 containerd[1486]: time="2025-02-13T19:42:45.148368612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:45.174553 systemd[1]: Started cri-containerd-39f389036e815bc459728a0caf0cefd28f1709d1774fa6015f3237a2b55f541b.scope - libcontainer container 39f389036e815bc459728a0caf0cefd28f1709d1774fa6015f3237a2b55f541b. Feb 13 19:42:45.175981 systemd[1]: Started cri-containerd-52a75c920b1b352026a7f181163684d6c5445dfdb20a028e03e8a21295c2eeec.scope - libcontainer container 52a75c920b1b352026a7f181163684d6c5445dfdb20a028e03e8a21295c2eeec. 
Feb 13 19:42:45.189352 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:42:45.191720 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:42:45.219345 containerd[1486]: time="2025-02-13T19:42:45.219196004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cdnz7,Uid:e0777e6a-5fa1-4369-b957-b36c92fecc09,Namespace:kube-system,Attempt:0,} returns sandbox id \"39f389036e815bc459728a0caf0cefd28f1709d1774fa6015f3237a2b55f541b\"" Feb 13 19:42:45.219345 containerd[1486]: time="2025-02-13T19:42:45.219314486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ns6dp,Uid:106c17dc-ed2c-4e78-904a-09e85cc5d345,Namespace:kube-system,Attempt:0,} returns sandbox id \"52a75c920b1b352026a7f181163684d6c5445dfdb20a028e03e8a21295c2eeec\"" Feb 13 19:42:45.220328 kubelet[2594]: E0213 19:42:45.220294 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:45.223359 kubelet[2594]: E0213 19:42:45.222194 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:45.225061 containerd[1486]: time="2025-02-13T19:42:45.224157399Z" level=info msg="CreateContainer within sandbox \"52a75c920b1b352026a7f181163684d6c5445dfdb20a028e03e8a21295c2eeec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:42:45.225061 containerd[1486]: time="2025-02-13T19:42:45.224445890Z" level=info msg="CreateContainer within sandbox \"39f389036e815bc459728a0caf0cefd28f1709d1774fa6015f3237a2b55f541b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:42:45.268125 containerd[1486]: time="2025-02-13T19:42:45.268071449Z" level=info msg="CreateContainer within sandbox \"52a75c920b1b352026a7f181163684d6c5445dfdb20a028e03e8a21295c2eeec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"803fced2b59bb2ef61e85162a4ae4b33cf8d1e4387ce885bea69fe00e752e486\"" Feb 13 19:42:45.268605 containerd[1486]: time="2025-02-13T19:42:45.268579022Z" level=info msg="StartContainer for \"803fced2b59bb2ef61e85162a4ae4b33cf8d1e4387ce885bea69fe00e752e486\"" Feb 13 19:42:45.274531 containerd[1486]: time="2025-02-13T19:42:45.274362231Z" level=info msg="CreateContainer within sandbox \"39f389036e815bc459728a0caf0cefd28f1709d1774fa6015f3237a2b55f541b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08eee83ac0cbde516e9aca5b59b628a65f1138bb67403af45f241127447b7178\"" Feb 13 19:42:45.274946 containerd[1486]: time="2025-02-13T19:42:45.274890493Z" level=info msg="StartContainer for \"08eee83ac0cbde516e9aca5b59b628a65f1138bb67403af45f241127447b7178\"" Feb 13 19:42:45.298653 systemd[1]: Started cri-containerd-803fced2b59bb2ef61e85162a4ae4b33cf8d1e4387ce885bea69fe00e752e486.scope - libcontainer container 803fced2b59bb2ef61e85162a4ae4b33cf8d1e4387ce885bea69fe00e752e486. Feb 13 19:42:45.314550 systemd[1]: Started cri-containerd-08eee83ac0cbde516e9aca5b59b628a65f1138bb67403af45f241127447b7178.scope - libcontainer container 08eee83ac0cbde516e9aca5b59b628a65f1138bb67403af45f241127447b7178. 
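The containerd entries above walk the CRI flow end to end: RunPodSandbox returns a sandbox id, CreateContainer registers a coredns container inside that sandbox, and StartContainer launches it. A small Python sketch, assuming trimmed copies of the messages above as input, that stitches the pod -> sandbox -> container chain back together; the regexes and variable names are illustrative:

# Sketch: recover pod -> sandbox -> container from containerd CRI messages.
# The two sample strings are trimmed copies of real entries above; only the
# fields the regexes need are kept.
import re

msgs = [
    'RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cdnz7,'
    'Namespace:kube-system,} returns sandbox id "39f389036e815bc459728a0caf0cefd2'
    '8f1709d1774fa6015f3237a2b55f541b"',
    'CreateContainer within sandbox "39f389036e815bc459728a0caf0cefd28f1709d1774f'
    'a6015f3237a2b55f541b" ... returns container id "08eee83ac0cbde516e9aca5b59b6'
    '28a65f1138bb67403af45f241127447b7178"',
]

pod_by_sandbox, containers = {}, {}
for msg in msgs:
    if m := re.search(r'PodSandboxMetadata\{Name:([^,]+).*returns sandbox id "(\w+)"', msg):
        pod_by_sandbox[m.group(2)] = m.group(1)
    if m := re.search(r'within sandbox "(\w+)".*returns container id "(\w+)"', msg):
        containers.setdefault(m.group(1), []).append(m.group(2))

for sandbox, pod in pod_by_sandbox.items():
    print(pod, "->", sandbox[:12], "->", [c[:12] for c in containers.get(sandbox, [])])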
Feb 13 19:42:45.342994 containerd[1486]: time="2025-02-13T19:42:45.342841063Z" level=info msg="StartContainer for \"803fced2b59bb2ef61e85162a4ae4b33cf8d1e4387ce885bea69fe00e752e486\" returns successfully" Feb 13 19:42:45.342994 containerd[1486]: time="2025-02-13T19:42:45.342912316Z" level=info msg="StartContainer for \"08eee83ac0cbde516e9aca5b59b628a65f1138bb67403af45f241127447b7178\" returns successfully" Feb 13 19:42:45.954275 kubelet[2594]: E0213 19:42:45.954229 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:45.956663 kubelet[2594]: E0213 19:42:45.956574 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:46.007670 kubelet[2594]: I0213 19:42:46.007610 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cdnz7" podStartSLOduration=26.007587129 podStartE2EDuration="26.007587129s" podCreationTimestamp="2025-02-13 19:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:45.999041796 +0000 UTC m=+32.216326407" watchObservedRunningTime="2025-02-13 19:42:46.007587129 +0000 UTC m=+32.224871740" Feb 13 19:42:46.018820 kubelet[2594]: I0213 19:42:46.018737 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ns6dp" podStartSLOduration=26.018716333 podStartE2EDuration="26.018716333s" podCreationTimestamp="2025-02-13 19:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:46.008057332 +0000 UTC m=+32.225341943" watchObservedRunningTime="2025-02-13 19:42:46.018716333 +0000 UTC m=+32.236000945" Feb 13 19:42:46.153557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583922632.mount: Deactivated successfully. Feb 13 19:42:46.769888 systemd[1]: Started sshd@8-10.0.0.105:22-10.0.0.1:33130.service - OpenSSH per-connection server daemon (10.0.0.1:33130). Feb 13 19:42:46.817308 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 33130 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:46.819089 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:46.823222 systemd-logind[1469]: New session 9 of user core. Feb 13 19:42:46.829552 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:42:46.959789 kubelet[2594]: E0213 19:42:46.959561 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:46.959789 kubelet[2594]: E0213 19:42:46.959700 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:46.963270 sshd[3997]: Connection closed by 10.0.0.1 port 33130 Feb 13 19:42:46.963635 sshd-session[3995]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:46.967146 systemd[1]: sshd@8-10.0.0.105:22-10.0.0.1:33130.service: Deactivated successfully. Feb 13 19:42:46.968922 systemd[1]: session-9.scope: Deactivated successfully. 
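The pod_startup_latency_tracker entries report both podStartSLOduration and podStartE2EDuration; the end-to-end figure is essentially observedRunningTime minus podCreationTimestamp. A short Python check against the cilium-9mjvr entry above, assuming its timestamps verbatim; the monotonic m=+... suffix and the "+0000 UTC" zone marker are stripped before parsing:

# Sketch: recompute a pod's end-to-end startup duration from a kubelet
# pod_startup_latency_tracker entry (values copied from cilium-9mjvr above).
import re
from datetime import datetime

entry = ('podCreationTimestamp="2025-02-13 19:42:19 +0000 UTC" '
         'observedRunningTime="2025-02-13 19:42:36.958159922 +0000 UTC m=+23.175444533"')

def ts(field: str) -> datetime:
    raw = re.search(field + r'="([^"]+)"', entry).group(1)
    raw = raw.split(" m=")[0].replace(" +0000 UTC", "")
    raw = re.sub(r"(\.\d{6})\d*", r"\1", raw)  # strptime's %f takes at most 6 digits
    fmt = "%Y-%m-%d %H:%M:%S.%f" if "." in raw else "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(raw, fmt)

e2e = ts("observedRunningTime") - ts("podCreationTimestamp")
print(f"startup took ~{e2e.total_seconds():.6f}s")  # close to the logged 17.958809973s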
Feb 13 19:42:46.969505 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:42:46.970296 systemd-logind[1469]: Removed session 9. Feb 13 19:42:47.961256 kubelet[2594]: E0213 19:42:47.961226 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:47.961256 kubelet[2594]: E0213 19:42:47.961251 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:51.975033 systemd[1]: Started sshd@9-10.0.0.105:22-10.0.0.1:33140.service - OpenSSH per-connection server daemon (10.0.0.1:33140). Feb 13 19:42:52.015019 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 33140 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:52.016759 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:52.021097 systemd-logind[1469]: New session 10 of user core. Feb 13 19:42:52.031630 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:42:52.147694 sshd[4014]: Connection closed by 10.0.0.1 port 33140 Feb 13 19:42:52.148042 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:52.152119 systemd[1]: sshd@9-10.0.0.105:22-10.0.0.1:33140.service: Deactivated successfully. Feb 13 19:42:52.154334 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:42:52.155010 systemd-logind[1469]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:42:52.156018 systemd-logind[1469]: Removed session 10. Feb 13 19:42:57.159676 systemd[1]: Started sshd@10-10.0.0.105:22-10.0.0.1:38934.service - OpenSSH per-connection server daemon (10.0.0.1:38934). Feb 13 19:42:57.205933 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 38934 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:57.207921 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:57.212490 systemd-logind[1469]: New session 11 of user core. Feb 13 19:42:57.228728 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:42:57.427491 sshd[4031]: Connection closed by 10.0.0.1 port 38934 Feb 13 19:42:57.427965 sshd-session[4029]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:57.432003 systemd[1]: sshd@10-10.0.0.105:22-10.0.0.1:38934.service: Deactivated successfully. Feb 13 19:42:57.434251 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:42:57.434948 systemd-logind[1469]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:42:57.435918 systemd-logind[1469]: Removed session 11. Feb 13 19:43:02.440041 systemd[1]: Started sshd@11-10.0.0.105:22-10.0.0.1:38950.service - OpenSSH per-connection server daemon (10.0.0.1:38950). Feb 13 19:43:02.483153 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 38950 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:02.484771 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:02.488655 systemd-logind[1469]: New session 12 of user core. Feb 13 19:43:02.499562 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 19:43:02.616489 sshd[4046]: Connection closed by 10.0.0.1 port 38950 Feb 13 19:43:02.616839 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:02.627985 systemd[1]: sshd@11-10.0.0.105:22-10.0.0.1:38950.service: Deactivated successfully. Feb 13 19:43:02.630037 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:43:02.631473 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:43:02.640778 systemd[1]: Started sshd@12-10.0.0.105:22-10.0.0.1:38962.service - OpenSSH per-connection server daemon (10.0.0.1:38962). Feb 13 19:43:02.641708 systemd-logind[1469]: Removed session 12. Feb 13 19:43:02.676788 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 38962 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:02.678371 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:02.682596 systemd-logind[1469]: New session 13 of user core. Feb 13 19:43:02.697558 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:43:02.857928 sshd[4061]: Connection closed by 10.0.0.1 port 38962 Feb 13 19:43:02.858630 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:02.869842 systemd[1]: sshd@12-10.0.0.105:22-10.0.0.1:38962.service: Deactivated successfully. Feb 13 19:43:02.874739 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:43:02.878643 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:43:02.893100 systemd[1]: Started sshd@13-10.0.0.105:22-10.0.0.1:38972.service - OpenSSH per-connection server daemon (10.0.0.1:38972). Feb 13 19:43:02.895679 systemd-logind[1469]: Removed session 13. Feb 13 19:43:02.944205 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 38972 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:02.946030 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:02.951178 systemd-logind[1469]: New session 14 of user core. Feb 13 19:43:02.960668 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:43:03.095396 sshd[4074]: Connection closed by 10.0.0.1 port 38972 Feb 13 19:43:03.096062 sshd-session[4072]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:03.100967 systemd[1]: sshd@13-10.0.0.105:22-10.0.0.1:38972.service: Deactivated successfully. Feb 13 19:43:03.103006 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:43:03.103672 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:43:03.104702 systemd-logind[1469]: Removed session 14. Feb 13 19:43:08.108528 systemd[1]: Started sshd@14-10.0.0.105:22-10.0.0.1:33920.service - OpenSSH per-connection server daemon (10.0.0.1:33920). Feb 13 19:43:08.149298 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 33920 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:08.151167 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:08.155289 systemd-logind[1469]: New session 15 of user core. Feb 13 19:43:08.163571 systemd[1]: Started session-15.scope - Session 15 of User core. 
Feb 13 19:43:08.277261 sshd[4090]: Connection closed by 10.0.0.1 port 33920 Feb 13 19:43:08.277620 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:08.281963 systemd[1]: sshd@14-10.0.0.105:22-10.0.0.1:33920.service: Deactivated successfully. Feb 13 19:43:08.284081 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:43:08.284755 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:43:08.285776 systemd-logind[1469]: Removed session 15. Feb 13 19:43:13.294533 systemd[1]: Started sshd@15-10.0.0.105:22-10.0.0.1:33930.service - OpenSSH per-connection server daemon (10.0.0.1:33930). Feb 13 19:43:13.337628 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 33930 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:13.339441 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:13.343789 systemd-logind[1469]: New session 16 of user core. Feb 13 19:43:13.352608 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:43:13.466217 sshd[4104]: Connection closed by 10.0.0.1 port 33930 Feb 13 19:43:13.466872 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:13.477669 systemd[1]: sshd@15-10.0.0.105:22-10.0.0.1:33930.service: Deactivated successfully. Feb 13 19:43:13.479799 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:43:13.481786 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:43:13.492759 systemd[1]: Started sshd@16-10.0.0.105:22-10.0.0.1:33940.service - OpenSSH per-connection server daemon (10.0.0.1:33940). Feb 13 19:43:13.494089 systemd-logind[1469]: Removed session 16. Feb 13 19:43:13.529374 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 33940 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:13.531199 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:13.535890 systemd-logind[1469]: New session 17 of user core. Feb 13 19:43:13.550548 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:43:13.739705 sshd[4118]: Connection closed by 10.0.0.1 port 33940 Feb 13 19:43:13.740230 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:13.751890 systemd[1]: sshd@16-10.0.0.105:22-10.0.0.1:33940.service: Deactivated successfully. Feb 13 19:43:13.754113 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:43:13.755840 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:43:13.764754 systemd[1]: Started sshd@17-10.0.0.105:22-10.0.0.1:33944.service - OpenSSH per-connection server daemon (10.0.0.1:33944). Feb 13 19:43:13.765760 systemd-logind[1469]: Removed session 17. Feb 13 19:43:13.808626 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 33944 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:13.810167 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:13.814468 systemd-logind[1469]: New session 18 of user core. Feb 13 19:43:13.826637 systemd[1]: Started session-18.scope - Session 18 of User core. 
Feb 13 19:43:14.581525 sshd[4131]: Connection closed by 10.0.0.1 port 33944 Feb 13 19:43:14.581962 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:14.597623 systemd[1]: sshd@17-10.0.0.105:22-10.0.0.1:33944.service: Deactivated successfully. Feb 13 19:43:14.600374 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:43:14.603317 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:43:14.612755 systemd[1]: Started sshd@18-10.0.0.105:22-10.0.0.1:34088.service - OpenSSH per-connection server daemon (10.0.0.1:34088). Feb 13 19:43:14.613858 systemd-logind[1469]: Removed session 18. Feb 13 19:43:14.649901 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 34088 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:14.651341 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:14.654986 systemd-logind[1469]: New session 19 of user core. Feb 13 19:43:14.664558 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:43:14.907689 sshd[4155]: Connection closed by 10.0.0.1 port 34088 Feb 13 19:43:14.908253 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:14.919159 systemd[1]: sshd@18-10.0.0.105:22-10.0.0.1:34088.service: Deactivated successfully. Feb 13 19:43:14.920796 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:43:14.922443 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:43:14.923665 systemd[1]: Started sshd@19-10.0.0.105:22-10.0.0.1:34098.service - OpenSSH per-connection server daemon (10.0.0.1:34098). Feb 13 19:43:14.924455 systemd-logind[1469]: Removed session 19. Feb 13 19:43:14.963286 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 34098 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:14.964924 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:14.968678 systemd-logind[1469]: New session 20 of user core. Feb 13 19:43:14.978542 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:43:15.097726 sshd[4168]: Connection closed by 10.0.0.1 port 34098 Feb 13 19:43:15.098100 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:15.102404 systemd[1]: sshd@19-10.0.0.105:22-10.0.0.1:34098.service: Deactivated successfully. Feb 13 19:43:15.104461 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:43:15.105197 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:43:15.106115 systemd-logind[1469]: Removed session 20. Feb 13 19:43:20.109077 systemd[1]: Started sshd@20-10.0.0.105:22-10.0.0.1:34114.service - OpenSSH per-connection server daemon (10.0.0.1:34114). Feb 13 19:43:20.147937 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 34114 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:20.149327 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:20.153173 systemd-logind[1469]: New session 21 of user core. Feb 13 19:43:20.162549 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 19:43:20.274820 sshd[4183]: Connection closed by 10.0.0.1 port 34114 Feb 13 19:43:20.275183 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:20.279466 systemd[1]: sshd@20-10.0.0.105:22-10.0.0.1:34114.service: Deactivated successfully. Feb 13 19:43:20.281620 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:43:20.282232 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:43:20.283046 systemd-logind[1469]: Removed session 21. Feb 13 19:43:23.861547 kubelet[2594]: E0213 19:43:23.861497 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:25.289618 systemd[1]: Started sshd@21-10.0.0.105:22-10.0.0.1:45190.service - OpenSSH per-connection server daemon (10.0.0.1:45190). Feb 13 19:43:25.329225 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 45190 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:25.330839 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:25.335118 systemd-logind[1469]: New session 22 of user core. Feb 13 19:43:25.342582 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:43:25.458507 sshd[4203]: Connection closed by 10.0.0.1 port 45190 Feb 13 19:43:25.458864 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:25.463229 systemd[1]: sshd@21-10.0.0.105:22-10.0.0.1:45190.service: Deactivated successfully. Feb 13 19:43:25.465844 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:43:25.466957 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:43:25.468013 systemd-logind[1469]: Removed session 22. Feb 13 19:43:28.861777 kubelet[2594]: E0213 19:43:28.861733 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:30.470509 systemd[1]: Started sshd@22-10.0.0.105:22-10.0.0.1:45200.service - OpenSSH per-connection server daemon (10.0.0.1:45200). Feb 13 19:43:30.510123 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 45200 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:30.511653 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:30.515616 systemd-logind[1469]: New session 23 of user core. Feb 13 19:43:30.524562 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:43:30.628114 sshd[4217]: Connection closed by 10.0.0.1 port 45200 Feb 13 19:43:30.628501 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:30.632802 systemd[1]: sshd@22-10.0.0.105:22-10.0.0.1:45200.service: Deactivated successfully. Feb 13 19:43:30.634681 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:43:30.635379 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:43:30.636279 systemd-logind[1469]: Removed session 23. 
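Sessions 8 through 23 in this stretch all follow the same systemd-logind pattern: "New session N of user core." paired later with "Removed session N." A Python sketch that pairs those two markers and reports per-session lifetimes, assuming journal lines shaped like the ones above; journal stamps omit the year, so 2025 is assumed when building full datetimes:

# Sketch: pair systemd-logind "New session N" / "Removed session N" entries
# and report how long each SSH session lived. Sample lines copied from above.
import re
from datetime import datetime

log = """Feb 13 19:43:08.155289 systemd-logind[1469]: New session 15 of user core.
Feb 13 19:43:08.285776 systemd-logind[1469]: Removed session 15."""

opened: dict[str, datetime] = {}
for line in log.splitlines():
    m = re.match(r"(\w+ \d+ [\d:.]+) .*?(New|Removed) session (\d+)", line)
    if not m:
        continue
    # Journal stamps carry no year; assume 2025 to parse a full datetime.
    when = datetime.strptime("2025 " + m.group(1), "%Y %b %d %H:%M:%S.%f")
    if m.group(2) == "New":
        opened[m.group(3)] = when
    elif m.group(3) in opened:
        lived = (when - opened.pop(m.group(3))).total_seconds()
        print(f"session {m.group(3)}: {lived:.3f}s")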
Feb 13 19:43:34.861564 kubelet[2594]: E0213 19:43:34.861500 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:35.641268 systemd[1]: Started sshd@23-10.0.0.105:22-10.0.0.1:52342.service - OpenSSH per-connection server daemon (10.0.0.1:52342). Feb 13 19:43:35.680535 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 52342 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:35.682120 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:35.686261 systemd-logind[1469]: New session 24 of user core. Feb 13 19:43:35.693587 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:43:35.806469 sshd[4231]: Connection closed by 10.0.0.1 port 52342 Feb 13 19:43:35.806857 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:35.820529 systemd[1]: sshd@23-10.0.0.105:22-10.0.0.1:52342.service: Deactivated successfully. Feb 13 19:43:35.822480 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:43:35.824255 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:43:35.831812 systemd[1]: Started sshd@24-10.0.0.105:22-10.0.0.1:52354.service - OpenSSH per-connection server daemon (10.0.0.1:52354). Feb 13 19:43:35.832917 systemd-logind[1469]: Removed session 24. Feb 13 19:43:35.867335 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 52354 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:35.868946 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:35.873094 systemd-logind[1469]: New session 25 of user core. Feb 13 19:43:35.884744 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:43:37.214265 containerd[1486]: time="2025-02-13T19:43:37.214217677Z" level=info msg="StopContainer for \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\" with timeout 30 (s)" Feb 13 19:43:37.219758 containerd[1486]: time="2025-02-13T19:43:37.219694376Z" level=info msg="Stop container \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\" with signal terminated" Feb 13 19:43:37.233282 systemd[1]: cri-containerd-fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76.scope: Deactivated successfully. Feb 13 19:43:37.247710 containerd[1486]: time="2025-02-13T19:43:37.247659678Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:43:37.250846 containerd[1486]: time="2025-02-13T19:43:37.250774958Z" level=info msg="StopContainer for \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\" with timeout 2 (s)" Feb 13 19:43:37.251172 containerd[1486]: time="2025-02-13T19:43:37.251021566Z" level=info msg="Stop container \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\" with signal terminated" Feb 13 19:43:37.252924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76-rootfs.mount: Deactivated successfully. 
Feb 13 19:43:37.257717 systemd-networkd[1379]: lxc_health: Link DOWN Feb 13 19:43:37.257725 systemd-networkd[1379]: lxc_health: Lost carrier Feb 13 19:43:37.262174 containerd[1486]: time="2025-02-13T19:43:37.261953083Z" level=info msg="shim disconnected" id=fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76 namespace=k8s.io Feb 13 19:43:37.262174 containerd[1486]: time="2025-02-13T19:43:37.262016794Z" level=warning msg="cleaning up after shim disconnected" id=fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76 namespace=k8s.io Feb 13 19:43:37.262174 containerd[1486]: time="2025-02-13T19:43:37.262027364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:37.280139 containerd[1486]: time="2025-02-13T19:43:37.280097134Z" level=info msg="StopContainer for \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\" returns successfully" Feb 13 19:43:37.283899 containerd[1486]: time="2025-02-13T19:43:37.283871564Z" level=info msg="StopPodSandbox for \"5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1\"" Feb 13 19:43:37.284594 systemd[1]: cri-containerd-cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132.scope: Deactivated successfully. Feb 13 19:43:37.284992 systemd[1]: cri-containerd-cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132.scope: Consumed 6.772s CPU time. Feb 13 19:43:37.293477 containerd[1486]: time="2025-02-13T19:43:37.283902512Z" level=info msg="Container to stop \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:43:37.295416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1-shm.mount: Deactivated successfully. Feb 13 19:43:37.300696 systemd[1]: cri-containerd-5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1.scope: Deactivated successfully. Feb 13 19:43:37.307944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132-rootfs.mount: Deactivated successfully. Feb 13 19:43:37.316049 containerd[1486]: time="2025-02-13T19:43:37.315992308Z" level=info msg="shim disconnected" id=cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132 namespace=k8s.io Feb 13 19:43:37.316049 containerd[1486]: time="2025-02-13T19:43:37.316040750Z" level=warning msg="cleaning up after shim disconnected" id=cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132 namespace=k8s.io Feb 13 19:43:37.316049 containerd[1486]: time="2025-02-13T19:43:37.316050979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:37.323602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1-rootfs.mount: Deactivated successfully. 
Feb 13 19:43:37.329762 containerd[1486]: time="2025-02-13T19:43:37.329688450Z" level=info msg="shim disconnected" id=5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1 namespace=k8s.io Feb 13 19:43:37.329762 containerd[1486]: time="2025-02-13T19:43:37.329752943Z" level=warning msg="cleaning up after shim disconnected" id=5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1 namespace=k8s.io Feb 13 19:43:37.329762 containerd[1486]: time="2025-02-13T19:43:37.329761379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:37.334604 containerd[1486]: time="2025-02-13T19:43:37.334568408Z" level=info msg="StopContainer for \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\" returns successfully" Feb 13 19:43:37.335177 containerd[1486]: time="2025-02-13T19:43:37.335158617Z" level=info msg="StopPodSandbox for \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\"" Feb 13 19:43:37.335224 containerd[1486]: time="2025-02-13T19:43:37.335194215Z" level=info msg="Container to stop \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:43:37.335224 containerd[1486]: time="2025-02-13T19:43:37.335205466Z" level=info msg="Container to stop \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:43:37.335224 containerd[1486]: time="2025-02-13T19:43:37.335216528Z" level=info msg="Container to stop \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:43:37.335314 containerd[1486]: time="2025-02-13T19:43:37.335225465Z" level=info msg="Container to stop \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:43:37.335314 containerd[1486]: time="2025-02-13T19:43:37.335234131Z" level=info msg="Container to stop \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:43:37.341048 systemd[1]: cri-containerd-6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab.scope: Deactivated successfully. 
Feb 13 19:43:37.343276 containerd[1486]: time="2025-02-13T19:43:37.343239888Z" level=info msg="TearDown network for sandbox \"5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1\" successfully" Feb 13 19:43:37.343276 containerd[1486]: time="2025-02-13T19:43:37.343267160Z" level=info msg="StopPodSandbox for \"5985f647a01b47e38a9a9770395128e576a430cc076aac5f36ac437e3ca642e1\" returns successfully" Feb 13 19:43:37.372897 containerd[1486]: time="2025-02-13T19:43:37.372839931Z" level=info msg="shim disconnected" id=6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab namespace=k8s.io Feb 13 19:43:37.372897 containerd[1486]: time="2025-02-13T19:43:37.372896337Z" level=warning msg="cleaning up after shim disconnected" id=6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab namespace=k8s.io Feb 13 19:43:37.373109 containerd[1486]: time="2025-02-13T19:43:37.372904364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:37.387385 containerd[1486]: time="2025-02-13T19:43:37.387269875Z" level=info msg="TearDown network for sandbox \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" successfully" Feb 13 19:43:37.387385 containerd[1486]: time="2025-02-13T19:43:37.387301426Z" level=info msg="StopPodSandbox for \"6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab\" returns successfully" Feb 13 19:43:37.456196 kubelet[2594]: I0213 19:43:37.456131 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d23d8d0a-508e-4f4d-aaf8-6612560985c2-cilium-config-path\") pod \"d23d8d0a-508e-4f4d-aaf8-6612560985c2\" (UID: \"d23d8d0a-508e-4f4d-aaf8-6612560985c2\") " Feb 13 19:43:37.456196 kubelet[2594]: I0213 19:43:37.456186 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glpfq\" (UniqueName: \"kubernetes.io/projected/e9a5ac7b-add9-4b57-a754-d102b5796ea9-kube-api-access-glpfq\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456196 kubelet[2594]: I0213 19:43:37.456206 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5pqp\" (UniqueName: \"kubernetes.io/projected/d23d8d0a-508e-4f4d-aaf8-6612560985c2-kube-api-access-w5pqp\") pod \"d23d8d0a-508e-4f4d-aaf8-6612560985c2\" (UID: \"d23d8d0a-508e-4f4d-aaf8-6612560985c2\") " Feb 13 19:43:37.456753 kubelet[2594]: I0213 19:43:37.456224 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-host-proc-sys-net\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456753 kubelet[2594]: I0213 19:43:37.456241 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9a5ac7b-add9-4b57-a754-d102b5796ea9-hubble-tls\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456753 kubelet[2594]: I0213 19:43:37.456263 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-hostproc\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456753 kubelet[2594]: I0213 
19:43:37.456278 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9a5ac7b-add9-4b57-a754-d102b5796ea9-clustermesh-secrets\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456753 kubelet[2594]: I0213 19:43:37.456291 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cni-path\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456753 kubelet[2594]: I0213 19:43:37.456306 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-config-path\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456901 kubelet[2594]: I0213 19:43:37.456321 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-xtables-lock\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456901 kubelet[2594]: I0213 19:43:37.456333 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-run\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456901 kubelet[2594]: I0213 19:43:37.456346 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-cgroup\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456901 kubelet[2594]: I0213 19:43:37.456362 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-host-proc-sys-kernel\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456901 kubelet[2594]: I0213 19:43:37.456381 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-lib-modules\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.456901 kubelet[2594]: I0213 19:43:37.456398 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-etc-cni-netd\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.457045 kubelet[2594]: I0213 19:43:37.456412 2594 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-bpf-maps\") pod \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\" (UID: \"e9a5ac7b-add9-4b57-a754-d102b5796ea9\") " Feb 13 19:43:37.457045 kubelet[2594]: I0213 19:43:37.456512 2594 operation_generator.go:780] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.459536 kubelet[2594]: I0213 19:43:37.459494 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d23d8d0a-508e-4f4d-aaf8-6612560985c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d23d8d0a-508e-4f4d-aaf8-6612560985c2" (UID: "d23d8d0a-508e-4f4d-aaf8-6612560985c2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:43:37.459567 kubelet[2594]: I0213 19:43:37.459552 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-hostproc" (OuterVolumeSpecName: "hostproc") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.459600 kubelet[2594]: I0213 19:43:37.459570 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.460077 kubelet[2594]: I0213 19:43:37.460037 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:43:37.462588 kubelet[2594]: I0213 19:43:37.462401 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a5ac7b-add9-4b57-a754-d102b5796ea9-kube-api-access-glpfq" (OuterVolumeSpecName: "kube-api-access-glpfq") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "kube-api-access-glpfq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:43:37.462588 kubelet[2594]: I0213 19:43:37.462467 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a5ac7b-add9-4b57-a754-d102b5796ea9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:43:37.462588 kubelet[2594]: I0213 19:43:37.462498 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.462588 kubelet[2594]: I0213 19:43:37.462496 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cni-path" (OuterVolumeSpecName: "cni-path") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.462588 kubelet[2594]: I0213 19:43:37.462513 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.462819 kubelet[2594]: I0213 19:43:37.462522 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.462819 kubelet[2594]: I0213 19:43:37.462529 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.462819 kubelet[2594]: I0213 19:43:37.462542 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.462819 kubelet[2594]: I0213 19:43:37.462544 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:43:37.462819 kubelet[2594]: I0213 19:43:37.462657 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a5ac7b-add9-4b57-a754-d102b5796ea9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e9a5ac7b-add9-4b57-a754-d102b5796ea9" (UID: "e9a5ac7b-add9-4b57-a754-d102b5796ea9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:43:37.462945 kubelet[2594]: I0213 19:43:37.462673 2594 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d23d8d0a-508e-4f4d-aaf8-6612560985c2-kube-api-access-w5pqp" (OuterVolumeSpecName: "kube-api-access-w5pqp") pod "d23d8d0a-508e-4f4d-aaf8-6612560985c2" (UID: "d23d8d0a-508e-4f4d-aaf8-6612560985c2"). InnerVolumeSpecName "kube-api-access-w5pqp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:43:37.557097 kubelet[2594]: I0213 19:43:37.557044 2594 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9a5ac7b-add9-4b57-a754-d102b5796ea9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557097 kubelet[2594]: I0213 19:43:37.557079 2594 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557097 kubelet[2594]: I0213 19:43:37.557091 2594 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557097 kubelet[2594]: I0213 19:43:37.557104 2594 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557097 kubelet[2594]: I0213 19:43:37.557115 2594 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557097 kubelet[2594]: I0213 19:43:37.557125 2594 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557582 kubelet[2594]: I0213 19:43:37.557135 2594 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557582 kubelet[2594]: I0213 19:43:37.557146 2594 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557582 kubelet[2594]: I0213 19:43:37.557153 2594 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557582 kubelet[2594]: I0213 19:43:37.557161 2594 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557582 kubelet[2594]: I0213 19:43:37.557169 2594 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557582 kubelet[2594]: I0213 19:43:37.557176 2594 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d23d8d0a-508e-4f4d-aaf8-6612560985c2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557582 kubelet[2594]: I0213 19:43:37.557184 2594 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-glpfq\" (UniqueName: \"kubernetes.io/projected/e9a5ac7b-add9-4b57-a754-d102b5796ea9-kube-api-access-glpfq\") on node \"localhost\" 
DevicePath \"\"" Feb 13 19:43:37.557582 kubelet[2594]: I0213 19:43:37.557192 2594 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w5pqp\" (UniqueName: \"kubernetes.io/projected/d23d8d0a-508e-4f4d-aaf8-6612560985c2-kube-api-access-w5pqp\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557826 kubelet[2594]: I0213 19:43:37.557199 2594 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9a5ac7b-add9-4b57-a754-d102b5796ea9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.557826 kubelet[2594]: I0213 19:43:37.557207 2594 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9a5ac7b-add9-4b57-a754-d102b5796ea9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:43:37.868385 systemd[1]: Removed slice kubepods-burstable-pode9a5ac7b_add9_4b57_a754_d102b5796ea9.slice - libcontainer container kubepods-burstable-pode9a5ac7b_add9_4b57_a754_d102b5796ea9.slice. Feb 13 19:43:37.868491 systemd[1]: kubepods-burstable-pode9a5ac7b_add9_4b57_a754_d102b5796ea9.slice: Consumed 6.876s CPU time. Feb 13 19:43:37.870078 systemd[1]: Removed slice kubepods-besteffort-podd23d8d0a_508e_4f4d_aaf8_6612560985c2.slice - libcontainer container kubepods-besteffort-podd23d8d0a_508e_4f4d_aaf8_6612560985c2.slice. Feb 13 19:43:38.065213 kubelet[2594]: I0213 19:43:38.065176 2594 scope.go:117] "RemoveContainer" containerID="fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76" Feb 13 19:43:38.073416 containerd[1486]: time="2025-02-13T19:43:38.073378863Z" level=info msg="RemoveContainer for \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\"" Feb 13 19:43:38.082076 containerd[1486]: time="2025-02-13T19:43:38.082019009Z" level=info msg="RemoveContainer for \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\" returns successfully" Feb 13 19:43:38.082479 kubelet[2594]: I0213 19:43:38.082453 2594 scope.go:117] "RemoveContainer" containerID="fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76" Feb 13 19:43:38.082776 containerd[1486]: time="2025-02-13T19:43:38.082682747Z" level=error msg="ContainerStatus for \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\": not found" Feb 13 19:43:38.089694 kubelet[2594]: E0213 19:43:38.089650 2594 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\": not found" containerID="fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76" Feb 13 19:43:38.089856 kubelet[2594]: I0213 19:43:38.089698 2594 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76"} err="failed to get container status \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc700f27a99c0d9f134733e138b879fb88ff6e01940493a64caa52eee69c3a76\": not found" Feb 13 19:43:38.089856 kubelet[2594]: I0213 19:43:38.089767 2594 scope.go:117] "RemoveContainer" containerID="cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132" Feb 13 19:43:38.091138 
containerd[1486]: time="2025-02-13T19:43:38.091093239Z" level=info msg="RemoveContainer for \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\"" Feb 13 19:43:38.096753 containerd[1486]: time="2025-02-13T19:43:38.096714319Z" level=info msg="RemoveContainer for \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\" returns successfully" Feb 13 19:43:38.096950 kubelet[2594]: I0213 19:43:38.096914 2594 scope.go:117] "RemoveContainer" containerID="0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56" Feb 13 19:43:38.102128 containerd[1486]: time="2025-02-13T19:43:38.102084464Z" level=info msg="RemoveContainer for \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\"" Feb 13 19:43:38.117989 containerd[1486]: time="2025-02-13T19:43:38.117943742Z" level=info msg="RemoveContainer for \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\" returns successfully" Feb 13 19:43:38.118249 kubelet[2594]: I0213 19:43:38.118224 2594 scope.go:117] "RemoveContainer" containerID="11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891" Feb 13 19:43:38.119488 containerd[1486]: time="2025-02-13T19:43:38.119358195Z" level=info msg="RemoveContainer for \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\"" Feb 13 19:43:38.122864 containerd[1486]: time="2025-02-13T19:43:38.122827445Z" level=info msg="RemoveContainer for \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\" returns successfully" Feb 13 19:43:38.123092 kubelet[2594]: I0213 19:43:38.123045 2594 scope.go:117] "RemoveContainer" containerID="98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14" Feb 13 19:43:38.124054 containerd[1486]: time="2025-02-13T19:43:38.124023182Z" level=info msg="RemoveContainer for \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\"" Feb 13 19:43:38.127840 containerd[1486]: time="2025-02-13T19:43:38.127804434Z" level=info msg="RemoveContainer for \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\" returns successfully" Feb 13 19:43:38.128022 kubelet[2594]: I0213 19:43:38.127997 2594 scope.go:117] "RemoveContainer" containerID="0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20" Feb 13 19:43:38.128863 containerd[1486]: time="2025-02-13T19:43:38.128823236Z" level=info msg="RemoveContainer for \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\"" Feb 13 19:43:38.132472 containerd[1486]: time="2025-02-13T19:43:38.132437180Z" level=info msg="RemoveContainer for \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\" returns successfully" Feb 13 19:43:38.132624 kubelet[2594]: I0213 19:43:38.132593 2594 scope.go:117] "RemoveContainer" containerID="cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132" Feb 13 19:43:38.132816 containerd[1486]: time="2025-02-13T19:43:38.132781072Z" level=error msg="ContainerStatus for \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\": not found" Feb 13 19:43:38.132991 kubelet[2594]: E0213 19:43:38.132963 2594 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\": not found" containerID="cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132" Feb 13 
19:43:38.133162 kubelet[2594]: I0213 19:43:38.132994 2594 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132"} err="failed to get container status \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdc243b0287cfde7bef9ba9aef1dad38bb86ac72a70730f3fc943dede3447132\": not found" Feb 13 19:43:38.133162 kubelet[2594]: I0213 19:43:38.133016 2594 scope.go:117] "RemoveContainer" containerID="0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56" Feb 13 19:43:38.133246 containerd[1486]: time="2025-02-13T19:43:38.133202080Z" level=error msg="ContainerStatus for \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\": not found" Feb 13 19:43:38.133367 kubelet[2594]: E0213 19:43:38.133344 2594 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\": not found" containerID="0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56" Feb 13 19:43:38.133434 kubelet[2594]: I0213 19:43:38.133365 2594 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56"} err="failed to get container status \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fdcc9a46aec3feb0150d12958c4264dbe35eac4b8fb0cc99fb447e785054d56\": not found" Feb 13 19:43:38.133434 kubelet[2594]: I0213 19:43:38.133379 2594 scope.go:117] "RemoveContainer" containerID="11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891" Feb 13 19:43:38.133562 containerd[1486]: time="2025-02-13T19:43:38.133536024Z" level=error msg="ContainerStatus for \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\": not found" Feb 13 19:43:38.133659 kubelet[2594]: E0213 19:43:38.133637 2594 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\": not found" containerID="11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891" Feb 13 19:43:38.133697 kubelet[2594]: I0213 19:43:38.133663 2594 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891"} err="failed to get container status \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\": rpc error: code = NotFound desc = an error occurred when try to find container \"11e4bd7a24fed581482c00c9289c97c88ec232a6885fd6714ad653f7fbf73891\": not found" Feb 13 19:43:38.133697 kubelet[2594]: I0213 19:43:38.133682 2594 scope.go:117] "RemoveContainer" containerID="98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14" Feb 13 19:43:38.133860 containerd[1486]: time="2025-02-13T19:43:38.133825162Z" 
level=error msg="ContainerStatus for \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\": not found" Feb 13 19:43:38.133949 kubelet[2594]: E0213 19:43:38.133929 2594 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\": not found" containerID="98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14" Feb 13 19:43:38.133993 kubelet[2594]: I0213 19:43:38.133947 2594 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14"} err="failed to get container status \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\": rpc error: code = NotFound desc = an error occurred when try to find container \"98d44486da2863f06e7a597c23889b5c0082d4fa0a329669c1b50cd5b7634b14\": not found" Feb 13 19:43:38.133993 kubelet[2594]: I0213 19:43:38.133961 2594 scope.go:117] "RemoveContainer" containerID="0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20" Feb 13 19:43:38.134246 containerd[1486]: time="2025-02-13T19:43:38.134104592Z" level=error msg="ContainerStatus for \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\": not found" Feb 13 19:43:38.134301 kubelet[2594]: E0213 19:43:38.134200 2594 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\": not found" containerID="0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20" Feb 13 19:43:38.134301 kubelet[2594]: I0213 19:43:38.134219 2594 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20"} err="failed to get container status \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\": rpc error: code = NotFound desc = an error occurred when try to find container \"0aa8d4902cd6b89460cea64397160e470aff82d33f217ee426396c9392290f20\": not found" Feb 13 19:43:38.226755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab-rootfs.mount: Deactivated successfully. Feb 13 19:43:38.226881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d0a495675ab64d5fc07209a1bab4c4ad7f61c9987849c0cc8affb3c11a4c2ab-shm.mount: Deactivated successfully. Feb 13 19:43:38.226973 systemd[1]: var-lib-kubelet-pods-d23d8d0a\x2d508e\x2d4f4d\x2daaf8\x2d6612560985c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5pqp.mount: Deactivated successfully. Feb 13 19:43:38.227059 systemd[1]: var-lib-kubelet-pods-e9a5ac7b\x2dadd9\x2d4b57\x2da754\x2dd102b5796ea9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dglpfq.mount: Deactivated successfully. Feb 13 19:43:38.227151 systemd[1]: var-lib-kubelet-pods-e9a5ac7b\x2dadd9\x2d4b57\x2da754\x2dd102b5796ea9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
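The entries above show the kubelet's remove-then-verify pattern against the CRI runtime: each RemoveContainer succeeds, and the follow-up ContainerStatus deliberately comes back NotFound, which the kubelet logs at error level but treats as confirmation that the container is gone. A minimal Go sketch of the same two calls, assuming containerd's default CRI socket (the container ID is a truncated placeholder):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// containerd's CRI endpoint; the same socket the kubelet above talks to.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)

	const id = "fc700f27a99c..." // truncated placeholder, not a resolvable ID

	// Step 1: remove the dead container.
	if _, err := client.RemoveContainer(ctx,
		&runtimeapi.RemoveContainerRequest{ContainerId: id}); err != nil {
		panic(err)
	}

	// Step 2: a status probe on the removed ID should now fail with
	// NotFound, matching the "ContainerStatus ... failed" entries above.
	_, err = client.ContainerStatus(ctx,
		&runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		fmt.Println("container fully removed")
	}
}
```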
Feb 13 19:43:38.227246 systemd[1]: var-lib-kubelet-pods-e9a5ac7b\x2dadd9\x2d4b57\x2da754\x2dd102b5796ea9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:43:38.929786 kubelet[2594]: E0213 19:43:38.929743 2594 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:43:39.175833 sshd[4245]: Connection closed by 10.0.0.1 port 52354
Feb 13 19:43:39.176392 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:39.188415 systemd[1]: sshd@24-10.0.0.105:22-10.0.0.1:52354.service: Deactivated successfully.
Feb 13 19:43:39.190485 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:43:39.192229 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:43:39.206766 systemd[1]: Started sshd@25-10.0.0.105:22-10.0.0.1:52366.service - OpenSSH per-connection server daemon (10.0.0.1:52366).
Feb 13 19:43:39.207624 systemd-logind[1469]: Removed session 25.
Feb 13 19:43:39.243397 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 52366 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:43:39.244637 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:39.248463 systemd-logind[1469]: New session 26 of user core.
Feb 13 19:43:39.258553 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:43:39.813113 sshd[4408]: Connection closed by 10.0.0.1 port 52366
Feb 13 19:43:39.812928 sshd-session[4406]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:39.822345 systemd[1]: sshd@25-10.0.0.105:22-10.0.0.1:52366.service: Deactivated successfully.
Feb 13 19:43:39.824688 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:43:39.827747 kubelet[2594]: I0213 19:43:39.827553 2594 memory_manager.go:355] "RemoveStaleState removing state" podUID="d23d8d0a-508e-4f4d-aaf8-6612560985c2" containerName="cilium-operator"
Feb 13 19:43:39.827747 kubelet[2594]: I0213 19:43:39.827575 2594 memory_manager.go:355] "RemoveStaleState removing state" podUID="e9a5ac7b-add9-4b57-a754-d102b5796ea9" containerName="cilium-agent"
Feb 13 19:43:39.830804 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:43:39.839827 systemd[1]: Started sshd@26-10.0.0.105:22-10.0.0.1:52382.service - OpenSSH per-connection server daemon (10.0.0.1:52382).
Feb 13 19:43:39.845451 systemd-logind[1469]: Removed session 26.
Feb 13 19:43:39.853570 systemd[1]: Created slice kubepods-burstable-pod068045fc_2b71_405a_b691_76df5fd2da55.slice - libcontainer container kubepods-burstable-pod068045fc_2b71_405a_b691_76df5fd2da55.slice.
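The "Created slice" entry reflects the kubelet's systemd cgroup driver: each pod gets a slice named after its QoS class and UID, with the UID's dashes escaped to underscores because systemd reserves "-" as a hierarchy separator. An illustrative reimplementation of the naming (not the kubelet's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the slice name visible in the journal:
// kubepods-<qos>-pod<uid>.slice, with "-" in the UID escaped to "_".
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Prints kubepods-burstable-pod068045fc_2b71_405a_b691_76df5fd2da55.slice,
	// the slice created for the new cilium-gdhtw pod above.
	fmt.Println(podSliceName("burstable", "068045fc-2b71-405a-b691-76df5fd2da55"))
}
```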
Feb 13 19:43:39.864844 kubelet[2594]: I0213 19:43:39.864445 2594 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d23d8d0a-508e-4f4d-aaf8-6612560985c2" path="/var/lib/kubelet/pods/d23d8d0a-508e-4f4d-aaf8-6612560985c2/volumes" Feb 13 19:43:39.866443 kubelet[2594]: I0213 19:43:39.865020 2594 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a5ac7b-add9-4b57-a754-d102b5796ea9" path="/var/lib/kubelet/pods/e9a5ac7b-add9-4b57-a754-d102b5796ea9/volumes" Feb 13 19:43:39.868213 kubelet[2594]: I0213 19:43:39.868176 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-host-proc-sys-kernel\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868213 kubelet[2594]: I0213 19:43:39.868213 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-cilium-run\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868297 kubelet[2594]: I0213 19:43:39.868228 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-bpf-maps\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868297 kubelet[2594]: I0213 19:43:39.868242 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-hostproc\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868297 kubelet[2594]: I0213 19:43:39.868257 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-cilium-cgroup\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868297 kubelet[2594]: I0213 19:43:39.868270 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-xtables-lock\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868297 kubelet[2594]: I0213 19:43:39.868284 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/068045fc-2b71-405a-b691-76df5fd2da55-cilium-ipsec-secrets\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868408 kubelet[2594]: I0213 19:43:39.868300 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-etc-cni-netd\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868408 kubelet[2594]: I0213 19:43:39.868315 2594 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/068045fc-2b71-405a-b691-76df5fd2da55-hubble-tls\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868408 kubelet[2594]: I0213 19:43:39.868330 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/068045fc-2b71-405a-b691-76df5fd2da55-cilium-config-path\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868408 kubelet[2594]: I0213 19:43:39.868344 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-host-proc-sys-net\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868408 kubelet[2594]: I0213 19:43:39.868360 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-cni-path\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868408 kubelet[2594]: I0213 19:43:39.868374 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/068045fc-2b71-405a-b691-76df5fd2da55-clustermesh-secrets\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868564 kubelet[2594]: I0213 19:43:39.868387 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/068045fc-2b71-405a-b691-76df5fd2da55-lib-modules\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.868564 kubelet[2594]: I0213 19:43:39.868402 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-499lh\" (UniqueName: \"kubernetes.io/projected/068045fc-2b71-405a-b691-76df5fd2da55-kube-api-access-499lh\") pod \"cilium-gdhtw\" (UID: \"068045fc-2b71-405a-b691-76df5fd2da55\") " pod="kube-system/cilium-gdhtw" Feb 13 19:43:39.886931 sshd[4419]: Accepted publickey for core from 10.0.0.1 port 52382 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:39.888529 sshd-session[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:39.895906 systemd-logind[1469]: New session 27 of user core. Feb 13 19:43:39.908595 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:43:39.959044 sshd[4421]: Connection closed by 10.0.0.1 port 52382 Feb 13 19:43:39.959597 sshd-session[4419]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:39.988214 systemd[1]: sshd@26-10.0.0.105:22-10.0.0.1:52382.service: Deactivated successfully. Feb 13 19:43:39.990372 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:43:39.992569 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit. 
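The VerifyControllerAttachedVolume entries above enumerate every volume of the new cilium-gdhtw pod before its sandbox can start: host paths for the agent's runtime state, secrets for clustermesh and IPsec, a projected service-account token, and the cilium-config ConfigMap. A sketch of how a few of these would be declared with the k8s.io/api types; the host paths are assumptions based on a typical Cilium DaemonSet, since the journal records only volume names and plugin kinds:

```go
package ciliumspec

import corev1 "k8s.io/api/core/v1"

// A subset of cilium-gdhtw's volumes. Paths are assumed, not logged.
func ciliumVolumes() []corev1.Volume {
	dirOrCreate := corev1.HostPathDirectoryOrCreate
	return []corev1.Volume{
		{Name: "cilium-run", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/var/run/cilium", Type: &dirOrCreate}}},
		{Name: "bpf-maps", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf", Type: &dirOrCreate}}},
		{Name: "clustermesh-secrets", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}}},
		{Name: "cilium-config-path", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"}}}},
	}
}
```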
Feb 13 19:43:40.000725 systemd[1]: Started sshd@27-10.0.0.105:22-10.0.0.1:52394.service - OpenSSH per-connection server daemon (10.0.0.1:52394). Feb 13 19:43:40.001615 systemd-logind[1469]: Removed session 27. Feb 13 19:43:40.035096 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 52394 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:40.036774 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:40.040576 systemd-logind[1469]: New session 28 of user core. Feb 13 19:43:40.049529 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:43:40.159916 kubelet[2594]: E0213 19:43:40.159784 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:40.160453 containerd[1486]: time="2025-02-13T19:43:40.160361105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gdhtw,Uid:068045fc-2b71-405a-b691-76df5fd2da55,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:40.189799 containerd[1486]: time="2025-02-13T19:43:40.189717012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:40.189799 containerd[1486]: time="2025-02-13T19:43:40.189760334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:40.189799 containerd[1486]: time="2025-02-13T19:43:40.189770293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:40.189965 containerd[1486]: time="2025-02-13T19:43:40.189841128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:40.217701 systemd[1]: Started cri-containerd-0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194.scope - libcontainer container 0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194. 
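The RunPodSandbox entry is the kubelet asking the runtime for a pause sandbox before any cilium container can run; the "loading plugin" lines that follow are the runc v2 shim bootstrapping for that sandbox. A sketch of the same CRI call, with the metadata taken from the logged request; a real kubelet request also carries DNS, port, and Linux security settings omitted here:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)

	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Metadata copied from the RunPodSandbox entry above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-gdhtw",
				Uid:       "068045fc-2b71-405a-b691-76df5fd2da55",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // 0f5b99ad72e3... in the log
}
```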
Feb 13 19:43:40.240205 containerd[1486]: time="2025-02-13T19:43:40.240164141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gdhtw,Uid:068045fc-2b71-405a-b691-76df5fd2da55,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\"" Feb 13 19:43:40.241110 kubelet[2594]: E0213 19:43:40.241074 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:40.242876 containerd[1486]: time="2025-02-13T19:43:40.242843850Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:43:40.269567 containerd[1486]: time="2025-02-13T19:43:40.258639672Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d946e559a78699bff1112c50dad1920857cc7e6cf2734956b9f3143513703eff\"" Feb 13 19:43:40.269932 containerd[1486]: time="2025-02-13T19:43:40.269902993Z" level=info msg="StartContainer for \"d946e559a78699bff1112c50dad1920857cc7e6cf2734956b9f3143513703eff\"" Feb 13 19:43:40.295550 systemd[1]: Started cri-containerd-d946e559a78699bff1112c50dad1920857cc7e6cf2734956b9f3143513703eff.scope - libcontainer container d946e559a78699bff1112c50dad1920857cc7e6cf2734956b9f3143513703eff. Feb 13 19:43:40.319965 containerd[1486]: time="2025-02-13T19:43:40.319857779Z" level=info msg="StartContainer for \"d946e559a78699bff1112c50dad1920857cc7e6cf2734956b9f3143513703eff\" returns successfully" Feb 13 19:43:40.329333 systemd[1]: cri-containerd-d946e559a78699bff1112c50dad1920857cc7e6cf2734956b9f3143513703eff.scope: Deactivated successfully. 
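With the sandbox id returned (0f5b99ad72e3...), the kubelet creates and starts the first init container, mount-cgroup, inside it. A sketch of that CreateContainer/StartContainer pair; the image reference is an assumption, since the journal names only the resulting container id (d946e559a786...):

```go
package criops

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startInitContainer mirrors the CreateContainer/StartContainer pair logged
// above for mount-cgroup. The image tag below is a placeholder assumption.
func startInitContainer(ctx context.Context, client runtimeapi.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {

	created, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // placeholder
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	_, err = client.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return created.ContainerId, err
}
```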
Feb 13 19:43:40.363737 containerd[1486]: time="2025-02-13T19:43:40.363668516Z" level=info msg="shim disconnected" id=d946e559a78699bff1112c50dad1920857cc7e6cf2734956b9f3143513703eff namespace=k8s.io Feb 13 19:43:40.363737 containerd[1486]: time="2025-02-13T19:43:40.363723801Z" level=warning msg="cleaning up after shim disconnected" id=d946e559a78699bff1112c50dad1920857cc7e6cf2734956b9f3143513703eff namespace=k8s.io Feb 13 19:43:40.363737 containerd[1486]: time="2025-02-13T19:43:40.363733700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:41.076797 kubelet[2594]: E0213 19:43:41.076742 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:41.079808 containerd[1486]: time="2025-02-13T19:43:41.079757555Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:43:41.096258 containerd[1486]: time="2025-02-13T19:43:41.096173797Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4\"" Feb 13 19:43:41.096788 containerd[1486]: time="2025-02-13T19:43:41.096725963Z" level=info msg="StartContainer for \"a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4\"" Feb 13 19:43:41.129545 systemd[1]: Started cri-containerd-a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4.scope - libcontainer container a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4. Feb 13 19:43:41.155223 containerd[1486]: time="2025-02-13T19:43:41.155153432Z" level=info msg="StartContainer for \"a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4\" returns successfully" Feb 13 19:43:41.161412 systemd[1]: cri-containerd-a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4.scope: Deactivated successfully. Feb 13 19:43:41.189167 containerd[1486]: time="2025-02-13T19:43:41.189100647Z" level=info msg="shim disconnected" id=a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4 namespace=k8s.io Feb 13 19:43:41.189167 containerd[1486]: time="2025-02-13T19:43:41.189154791Z" level=warning msg="cleaning up after shim disconnected" id=a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4 namespace=k8s.io Feb 13 19:43:41.189167 containerd[1486]: time="2025-02-13T19:43:41.189163106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:41.978556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3125a7dee617f74f921088e2be1398f89eb05dff6bed0821e431422de108fc4-rootfs.mount: Deactivated successfully. 
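The "shim disconnected" / "cleaning up dead shim" triads above are containerd tearing down the v2 runc shim once an init step's process exits, after which systemd reaps the task's rootfs mount unit. A sketch of the equivalent wait-then-delete against the containerd Go client, assuming the CRI plugin's "k8s.io" namespace (visible in the mount unit names above):

```go
package taskreap

import (
	"context"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// reap waits for a container's task to exit, then deletes it; deletion is the
// point at which the v2 runc shim shuts down and containerd logs the
// "shim disconnected" messages seen above.
func reap(id string) (uint32, error) {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		return 0, err
	}
	defer client.Close()

	// The CRI plugin keeps pod containers in the "k8s.io" namespace, which
	// also explains the run-containerd-...task-k8s.io-* mount unit names.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	cont, err := client.LoadContainer(ctx, id)
	if err != nil {
		return 0, err
	}
	task, err := cont.Task(ctx, nil)
	if err != nil {
		return 0, err
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return 0, err
	}
	exitStatus := <-exitCh
	if _, err := task.Delete(ctx); err != nil {
		return 0, err
	}
	return exitStatus.ExitCode(), nil
}
```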
Feb 13 19:43:42.079845 kubelet[2594]: E0213 19:43:42.079803 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:42.082253 containerd[1486]: time="2025-02-13T19:43:42.082176162Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:43:42.100101 containerd[1486]: time="2025-02-13T19:43:42.100034248Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299\"" Feb 13 19:43:42.100661 containerd[1486]: time="2025-02-13T19:43:42.100633083Z" level=info msg="StartContainer for \"2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299\"" Feb 13 19:43:42.138554 systemd[1]: Started cri-containerd-2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299.scope - libcontainer container 2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299. Feb 13 19:43:42.170865 containerd[1486]: time="2025-02-13T19:43:42.170822692Z" level=info msg="StartContainer for \"2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299\" returns successfully" Feb 13 19:43:42.172582 systemd[1]: cri-containerd-2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299.scope: Deactivated successfully. Feb 13 19:43:42.199958 containerd[1486]: time="2025-02-13T19:43:42.199890828Z" level=info msg="shim disconnected" id=2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299 namespace=k8s.io Feb 13 19:43:42.200371 containerd[1486]: time="2025-02-13T19:43:42.199960852Z" level=warning msg="cleaning up after shim disconnected" id=2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299 namespace=k8s.io Feb 13 19:43:42.200371 containerd[1486]: time="2025-02-13T19:43:42.199970910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:42.978569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a44b0f6ec5883573c7645c5e2099f5f8e8462c918a097224ff052b3e7cbe299-rootfs.mount: Deactivated successfully. Feb 13 19:43:43.083259 kubelet[2594]: E0213 19:43:43.083227 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:43.085255 containerd[1486]: time="2025-02-13T19:43:43.085195660Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:43:43.100739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252851820.mount: Deactivated successfully. 
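The mount-bpf-fs step that runs here is, as the name suggests, about guaranteeing that the BPF pseudo-filesystem is mounted at /sys/fs/bpf before the agent loads its datapath programs. Roughly what that init step achieves, sketched with golang.org/x/sys/unix; an approximation, not Cilium's actual implementation:

```go
package bpffs

import "golang.org/x/sys/unix"

// ensureBPFFS mounts bpffs at its conventional path unless something with
// the BPF filesystem magic is already mounted there.
func ensureBPFFS() error {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/bpf", &st); err == nil && st.Type == unix.BPF_FS_MAGIC {
		return nil // already mounted
	}
	return unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
}
```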
Feb 13 19:43:43.102337 containerd[1486]: time="2025-02-13T19:43:43.102305922Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff\"" Feb 13 19:43:43.102901 containerd[1486]: time="2025-02-13T19:43:43.102870701Z" level=info msg="StartContainer for \"15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff\"" Feb 13 19:43:43.133539 systemd[1]: Started cri-containerd-15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff.scope - libcontainer container 15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff. Feb 13 19:43:43.157681 systemd[1]: cri-containerd-15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff.scope: Deactivated successfully. Feb 13 19:43:43.160269 containerd[1486]: time="2025-02-13T19:43:43.160228624Z" level=info msg="StartContainer for \"15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff\" returns successfully" Feb 13 19:43:43.183650 containerd[1486]: time="2025-02-13T19:43:43.183579657Z" level=info msg="shim disconnected" id=15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff namespace=k8s.io Feb 13 19:43:43.183650 containerd[1486]: time="2025-02-13T19:43:43.183633239Z" level=warning msg="cleaning up after shim disconnected" id=15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff namespace=k8s.io Feb 13 19:43:43.183650 containerd[1486]: time="2025-02-13T19:43:43.183641365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:43.931059 kubelet[2594]: E0213 19:43:43.931003 2594 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:43:43.978840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15bde6c81578f065db0645149e5293aa214106dab1307bb3d77b34e1cd0d47ff-rootfs.mount: Deactivated successfully. Feb 13 19:43:44.086448 kubelet[2594]: E0213 19:43:44.086401 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:44.088196 containerd[1486]: time="2025-02-13T19:43:44.088157593Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:43:44.120618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403939457.mount: Deactivated successfully. Feb 13 19:43:44.122386 containerd[1486]: time="2025-02-13T19:43:44.122337054Z" level=info msg="CreateContainer within sandbox \"0f5b99ad72e3f7ea16104cd50959ce4e218b1a53996bc898e65c96d12be11194\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc3334c0b7037fb2695a9cb3d07aea6015e350fa347ccd7482c234f8cd5f8458\"" Feb 13 19:43:44.122990 containerd[1486]: time="2025-02-13T19:43:44.122944956Z" level=info msg="StartContainer for \"bc3334c0b7037fb2695a9cb3d07aea6015e350fa347ccd7482c234f8cd5f8458\"" Feb 13 19:43:44.156656 systemd[1]: Started cri-containerd-bc3334c0b7037fb2695a9cb3d07aea6015e350fa347ccd7482c234f8cd5f8458.scope - libcontainer container bc3334c0b7037fb2695a9cb3d07aea6015e350fa347ccd7482c234f8cd5f8458. 
Feb 13 19:43:44.193559 containerd[1486]: time="2025-02-13T19:43:44.193393147Z" level=info msg="StartContainer for \"bc3334c0b7037fb2695a9cb3d07aea6015e350fa347ccd7482c234f8cd5f8458\" returns successfully" Feb 13 19:43:44.600488 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 19:43:45.091308 kubelet[2594]: E0213 19:43:45.091273 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:45.105515 kubelet[2594]: I0213 19:43:45.105457 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gdhtw" podStartSLOduration=6.105441554 podStartE2EDuration="6.105441554s" podCreationTimestamp="2025-02-13 19:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:43:45.105081953 +0000 UTC m=+91.322366564" watchObservedRunningTime="2025-02-13 19:43:45.105441554 +0000 UTC m=+91.322726165" Feb 13 19:43:46.092848 kubelet[2594]: I0213 19:43:46.092788 2594 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:43:46Z","lastTransitionTime":"2025-02-13T19:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:43:46.160666 kubelet[2594]: E0213 19:43:46.160601 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:46.366016 systemd[1]: run-containerd-runc-k8s.io-bc3334c0b7037fb2695a9cb3d07aea6015e350fa347ccd7482c234f8cd5f8458-runc.F80klf.mount: Deactivated successfully. Feb 13 19:43:47.688136 systemd-networkd[1379]: lxc_health: Link UP Feb 13 19:43:47.698630 systemd-networkd[1379]: lxc_health: Gained carrier Feb 13 19:43:47.862465 kubelet[2594]: E0213 19:43:47.861303 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:48.162170 kubelet[2594]: E0213 19:43:48.161515 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:48.496990 systemd[1]: run-containerd-runc-k8s.io-bc3334c0b7037fb2695a9cb3d07aea6015e350fa347ccd7482c234f8cd5f8458-runc.UgzbnQ.mount: Deactivated successfully. Feb 13 19:43:49.098621 kubelet[2594]: E0213 19:43:49.098593 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:49.662925 systemd-networkd[1379]: lxc_health: Gained IPv6LL Feb 13 19:43:50.100269 kubelet[2594]: E0213 19:43:50.100237 2594 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:52.721473 sshd[4435]: Connection closed by 10.0.0.1 port 52394 Feb 13 19:43:52.722017 sshd-session[4432]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:52.727039 systemd[1]: sshd@27-10.0.0.105:22-10.0.0.1:52394.service: Deactivated successfully. 
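Once the cilium-agent container is up, the lxc_health interface appears and, with the CNI plugin registered, the recurring "Container runtime network not ready" errors stop and the node's Ready condition flips back. A client-go sketch for checking that condition on this node ("localhost" in the log); the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; not recorded in the journal.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Mirrors the "Node became not ready" -> Ready transition above.
			fmt.Printf("Ready=%s reason=%s msg=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}
```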
Feb 13 19:43:52.729198 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:43:52.729822 systemd-logind[1469]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:43:52.730737 systemd-logind[1469]: Removed session 28.