Feb 13 19:32:06.898087 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:32:06.898109 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:32:06.898121 kernel: BIOS-provided physical RAM map:
Feb 13 19:32:06.898127 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:32:06.898134 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:32:06.898140 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:32:06.898148 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:32:06.898155 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:32:06.898161 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:32:06.898170 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:32:06.898185 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:32:06.898192 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:32:06.898203 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:32:06.898210 kernel: NX (Execute Disable) protection: active
Feb 13 19:32:06.898218 kernel: APIC: Static calls initialized
Feb 13 19:32:06.898227 kernel: SMBIOS 2.8 present.
Feb 13 19:32:06.898235 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:32:06.898242 kernel: Hypervisor detected: KVM
Feb 13 19:32:06.898249 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:32:06.898256 kernel: kvm-clock: using sched offset of 3657158559 cycles
Feb 13 19:32:06.898263 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:32:06.898271 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 19:32:06.898278 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:32:06.898286 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:32:06.898293 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:32:06.898303 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:32:06.898310 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:32:06.898318 kernel: Using GB pages for direct mapping
Feb 13 19:32:06.898325 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:32:06.898332 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:32:06.898339 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:06.898347 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:06.898354 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:06.898361 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:32:06.898371 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:06.898378 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:06.898386 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:06.898393 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:06.898400 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:32:06.898408 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:32:06.898418 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:32:06.898428 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:32:06.898435 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:32:06.898443 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:32:06.898450 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:32:06.898461 kernel: No NUMA configuration found
Feb 13 19:32:06.898468 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:32:06.898476 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:32:06.898486 kernel: Zone ranges:
Feb 13 19:32:06.898493 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:32:06.898518 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:32:06.898526 kernel: Normal empty
Feb 13 19:32:06.898533 kernel: Movable zone start for each node
Feb 13 19:32:06.898541 kernel: Early memory node ranges
Feb 13 19:32:06.898548 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:32:06.898556 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:32:06.898563 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:32:06.898573 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:32:06.898583 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:32:06.898591 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:32:06.898598 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:32:06.898606 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:32:06.898613 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:32:06.898621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:32:06.898628 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:32:06.898636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:32:06.898646 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:32:06.898653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:32:06.898661 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:32:06.898668 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:32:06.898676 kernel: TSC deadline timer available
Feb 13 19:32:06.898683 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:32:06.898691 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:32:06.898698 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:32:06.898706 kernel: kvm-guest: setup PV sched yield
Feb 13 19:32:06.898714 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:32:06.898723 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:32:06.898731 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:32:06.898739 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:32:06.898746 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:32:06.898754 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:32:06.898761 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:32:06.898769 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:32:06.898776 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:32:06.898785 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:32:06.898795 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:32:06.898802 kernel: random: crng init done
Feb 13 19:32:06.898810 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:32:06.898818 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:32:06.898825 kernel: Fallback order for Node 0: 0
Feb 13 19:32:06.898833 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 19:32:06.898840 kernel: Policy zone: DMA32
Feb 13 19:32:06.898848 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:32:06.898858 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 138948K reserved, 0K cma-reserved)
Feb 13 19:32:06.898865 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:32:06.898873 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:32:06.898881 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:32:06.898888 kernel: Dynamic Preempt: voluntary
Feb 13 19:32:06.898895 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:32:06.899021 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:32:06.899029 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:32:06.899037 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:32:06.899046 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:32:06.899054 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:32:06.899062 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:32:06.899071 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:32:06.899079 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:32:06.899087 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:32:06.899094 kernel: Console: colour VGA+ 80x25
Feb 13 19:32:06.899101 kernel: printk: console [ttyS0] enabled
Feb 13 19:32:06.899109 kernel: ACPI: Core revision 20230628
Feb 13 19:32:06.899119 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:32:06.899126 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:32:06.899134 kernel: x2apic enabled
Feb 13 19:32:06.899141 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:32:06.899149 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:32:06.899157 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:32:06.899165 kernel: kvm-guest: setup PV IPIs
Feb 13 19:32:06.899191 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:32:06.899199 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:32:06.899207 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 19:32:06.899215 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:32:06.899223 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:32:06.899233 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:32:06.899241 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:32:06.899249 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:32:06.899257 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:32:06.899265 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:32:06.899275 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:32:06.899283 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:32:06.899291 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:32:06.899299 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:32:06.899307 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:32:06.899315 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:32:06.899323 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:32:06.899331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:32:06.899341 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:32:06.899349 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:32:06.899357 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:32:06.899365 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:32:06.899373 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:32:06.899381 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:32:06.899388 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:32:06.899396 kernel: landlock: Up and running.
Feb 13 19:32:06.899404 kernel: SELinux: Initializing.
Feb 13 19:32:06.899414 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:32:06.899422 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:32:06.899430 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:32:06.899438 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:32:06.899446 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:32:06.899454 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:32:06.899462 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:32:06.899472 kernel: ... version: 0
Feb 13 19:32:06.899480 kernel: ... bit width: 48
Feb 13 19:32:06.899490 kernel: ... generic registers: 6
Feb 13 19:32:06.899601 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:32:06.899611 kernel: ... max period: 00007fffffffffff
Feb 13 19:32:06.899619 kernel: ... fixed-purpose events: 0
Feb 13 19:32:06.899627 kernel: ... event mask: 000000000000003f
Feb 13 19:32:06.899635 kernel: signal: max sigframe size: 1776
Feb 13 19:32:06.899643 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:32:06.899651 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:32:06.899660 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:32:06.899671 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:32:06.899679 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:32:06.899686 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:32:06.899694 kernel: smpboot: Max logical packages: 1
Feb 13 19:32:06.899702 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 19:32:06.899710 kernel: devtmpfs: initialized
Feb 13 19:32:06.899718 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:32:06.899726 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:32:06.899734 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:32:06.899744 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:32:06.899751 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:32:06.899759 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:32:06.899767 kernel: audit: type=2000 audit(1739475125.791:1): state=initialized audit_enabled=0 res=1
Feb 13 19:32:06.899775 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:32:06.899782 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:32:06.899790 kernel: cpuidle: using governor menu
Feb 13 19:32:06.899798 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:32:06.899806 kernel: dca service started, version 1.12.1
Feb 13 19:32:06.899817 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:32:06.899827 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:32:06.899835 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:32:06.899845 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:32:06.899853 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:32:06.899860 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:32:06.899868 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:32:06.899876 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:32:06.899884 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:32:06.899894 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:32:06.899902 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:32:06.899910 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:32:06.899918 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:32:06.899926 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:32:06.899933 kernel: ACPI: Interpreter enabled
Feb 13 19:32:06.899941 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:32:06.899949 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:32:06.899957 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:32:06.899967 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:32:06.899975 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:32:06.899983 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:32:06.900182 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:32:06.900319 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:32:06.900443 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:32:06.900454 kernel: PCI host bridge to bus 0000:00
Feb 13 19:32:06.900601 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:32:06.900714 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:32:06.900828 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:32:06.900940 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:32:06.901050 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:32:06.901162 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:32:06.901284 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:32:06.901455 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:32:06.901627 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:32:06.901755 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:32:06.901879 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:32:06.902001 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:32:06.902123 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:32:06.902266 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:32:06.902396 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:32:06.902539 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:32:06.902679 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:32:06.902813 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:32:06.902938 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:32:06.903059 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:32:06.903196 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:32:06.903329 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:32:06.903454 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:32:06.903634 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:32:06.903756 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:32:06.903876 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:32:06.904009 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:32:06.904136 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:32:06.904277 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:32:06.904399 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:32:06.904535 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:32:06.904667 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:32:06.904789 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:32:06.904800 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:32:06.904813 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:32:06.904820 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:32:06.904828 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:32:06.904836 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:32:06.904844 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:32:06.904852 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:32:06.904860 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:32:06.904868 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:32:06.904875 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:32:06.904885 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:32:06.904893 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:32:06.904901 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:32:06.904909 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:32:06.904917 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:32:06.904925 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:32:06.904932 kernel: iommu: Default domain type: Translated
Feb 13 19:32:06.904941 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:32:06.904948 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:32:06.904959 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:32:06.904967 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:32:06.904975 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:32:06.905212 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:32:06.905334 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:32:06.905455 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:32:06.905466 kernel: vgaarb: loaded
Feb 13 19:32:06.905474 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:32:06.905485 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:32:06.905493 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:32:06.905514 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:32:06.905522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:32:06.905530 kernel: pnp: PnP ACPI init
Feb 13 19:32:06.905666 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:32:06.905678 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:32:06.905686 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:32:06.905698 kernel: NET: Registered PF_INET protocol family
Feb 13 19:32:06.905706 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:32:06.905714 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:32:06.905723 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:32:06.905731 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:32:06.905739 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:32:06.905747 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:32:06.905755 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:32:06.905763 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:32:06.905773 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:32:06.905781 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:32:06.905894 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:32:06.906007 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:32:06.906118 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:32:06.906241 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:32:06.906353 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:32:06.906481 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:32:06.906518 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:32:06.906527 kernel: Initialise system trusted keyrings
Feb 13 19:32:06.906537 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:32:06.906547 kernel: Key type asymmetric registered
Feb 13 19:32:06.906557 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:32:06.906568 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:32:06.906577 kernel: io scheduler mq-deadline registered
Feb 13 19:32:06.906587 kernel: io scheduler kyber registered
Feb 13 19:32:06.906597 kernel: io scheduler bfq registered
Feb 13 19:32:06.906611 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:32:06.906623 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:32:06.906633 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:32:06.906641 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:32:06.906649 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:32:06.906658 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:32:06.906666 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:32:06.906674 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:32:06.906682 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:32:06.906904 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:32:06.906936 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:32:06.907054 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:32:06.907170 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:32:06 UTC (1739475126)
Feb 13 19:32:06.907296 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:32:06.907307 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:32:06.907315 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:32:06.907323 kernel: Segment Routing with IPv6
Feb 13 19:32:06.907335 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:32:06.907343 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:32:06.907351 kernel: Key type dns_resolver registered
Feb 13 19:32:06.907359 kernel: IPI shorthand broadcast: enabled
Feb 13 19:32:06.907367 kernel: sched_clock: Marking stable (589003380, 107268309)->(746933016, -50661327)
Feb 13 19:32:06.907375 kernel: registered taskstats version 1
Feb 13 19:32:06.907383 kernel: Loading compiled-in X.509 certificates
Feb 13 19:32:06.907391 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:32:06.907399 kernel: Key type .fscrypt registered
Feb 13 19:32:06.907409 kernel: Key type fscrypt-provisioning registered
Feb 13 19:32:06.907417 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:32:06.907425 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:32:06.907433 kernel: ima: No architecture policies found
Feb 13 19:32:06.907441 kernel: clk: Disabling unused clocks
Feb 13 19:32:06.907449 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 19:32:06.907457 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:32:06.907465 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 19:32:06.907473 kernel: Run /init as init process
Feb 13 19:32:06.907483 kernel: with arguments:
Feb 13 19:32:06.907491 kernel: /init
Feb 13 19:32:06.907512 kernel: with environment:
Feb 13 19:32:06.907520 kernel: HOME=/
Feb 13 19:32:06.907528 kernel: TERM=linux
Feb 13 19:32:06.907536 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:32:06.907546 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:32:06.907556 systemd[1]: Detected virtualization kvm.
Feb 13 19:32:06.907568 systemd[1]: Detected architecture x86-64.
Feb 13 19:32:06.907577 systemd[1]: Running in initrd.
Feb 13 19:32:06.907585 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:32:06.907594 systemd[1]: Hostname set to .
Feb 13 19:32:06.907603 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:32:06.907611 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:32:06.907620 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:32:06.907628 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:32:06.907640 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:32:06.907663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:32:06.907674 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:32:06.907683 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:32:06.907694 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:32:06.907705 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:32:06.907714 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:32:06.907723 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:32:06.907732 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:32:06.907741 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:32:06.907749 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:32:06.907758 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:32:06.907767 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:32:06.907778 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:32:06.907787 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:32:06.907796 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:32:06.907805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:32:06.907814 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:32:06.907823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:32:06.907832 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:32:06.907840 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:32:06.907851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:32:06.907872 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:32:06.907884 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:32:06.907895 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:32:06.907906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:32:06.907915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:32:06.907924 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:32:06.907933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:32:06.907945 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:32:06.907957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:32:06.907989 systemd-journald[192]: Collecting audit messages is disabled.
Feb 13 19:32:06.908021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:32:06.908035 systemd-journald[192]: Journal started
Feb 13 19:32:06.908092 systemd-journald[192]: Runtime Journal (/run/log/journal/854922707fee4a69b608e344340c57e6) is 6.0M, max 48.3M, 42.3M free.
Feb 13 19:32:06.908561 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 19:32:06.940404 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:32:06.940438 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:32:06.941751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:06.944054 kernel: Bridge firewalling registered
Feb 13 19:32:06.944054 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 19:32:06.960669 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:32:06.961943 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:32:06.962847 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:32:06.963400 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:32:06.967571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:32:06.978409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:32:06.980056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:32:06.983804 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:32:06.985910 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:32:07.001877 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:32:07.005674 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:32:07.013390 dracut-cmdline[230]: dracut-dracut-053
Feb 13 19:32:07.016389 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:32:07.038038 systemd-resolved[234]: Positive Trust Anchors:
Feb 13 19:32:07.038056 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:32:07.038086 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:32:07.040522 systemd-resolved[234]: Defaulting to hostname 'linux'.
Feb 13 19:32:07.041578 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:32:07.049712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:32:07.117537 kernel: SCSI subsystem initialized
Feb 13 19:32:07.129526 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:32:07.142526 kernel: iscsi: registered transport (tcp)
Feb 13 19:32:07.164524 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:32:07.164554 kernel: QLogic iSCSI HBA Driver
Feb 13 19:32:07.217023 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:32:07.230851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:32:07.260536 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:32:07.260621 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:32:07.260633 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:32:07.306534 kernel: raid6: avx2x4 gen() 30344 MB/s
Feb 13 19:32:07.323523 kernel: raid6: avx2x2 gen() 30158 MB/s
Feb 13 19:32:07.340722 kernel: raid6: avx2x1 gen() 24206 MB/s
Feb 13 19:32:07.340746 kernel: raid6: using algorithm avx2x4 gen() 30344 MB/s
Feb 13 19:32:07.358621 kernel: raid6: .... xor() 7652 MB/s, rmw enabled
Feb 13 19:32:07.358677 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:32:07.379530 kernel: xor: automatically using best checksumming function avx
Feb 13 19:32:07.532530 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:32:07.547956 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:32:07.559805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:32:07.574677 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Feb 13 19:32:07.579415 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:32:07.587774 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:32:07.604016 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Feb 13 19:32:07.643374 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:32:07.656912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:32:07.720887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:32:07.729923 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:32:07.747439 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:32:07.751075 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:32:07.752720 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:32:07.793317 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:32:07.793335 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:32:07.793479 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:32:07.793517 kernel: GPT:9289727 != 19775487
Feb 13 19:32:07.793528 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:32:07.793539 kernel: GPT:9289727 != 19775487
Feb 13 19:32:07.793549 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:32:07.793559 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:32:07.793569 kernel: libata version 3.00 loaded.
Feb 13 19:32:07.754466 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:32:07.755953 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:32:07.797544 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:32:07.764834 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:32:07.777671 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:32:07.791301 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:32:07.791414 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:32:07.793371 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:32:07.796091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:32:07.796199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:07.810186 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:32:07.842724 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:32:07.842742 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:32:07.842753 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:32:07.842906 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:32:07.843043 kernel: scsi host0: ahci
Feb 13 19:32:07.843211 kernel: scsi host1: ahci
Feb 13 19:32:07.843356 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473)
Feb 13 19:32:07.843374 kernel: scsi host2: ahci
Feb 13 19:32:07.843608 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (472)
Feb 13 19:32:07.843621 kernel: scsi host3: ahci
Feb 13 19:32:07.843775 kernel: scsi host4: ahci
Feb 13 19:32:07.843917 kernel: scsi host5: ahci
Feb 13 19:32:07.844064 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 19:32:07.844080 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 19:32:07.844091 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 19:32:07.844102 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 19:32:07.844112 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 19:32:07.844123 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 19:32:07.808002 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:32:07.821084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:32:07.854722 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:32:07.890754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:32:07.893672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:07.903207 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:32:07.907822 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:32:07.908073 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:32:07.923619 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:32:07.924909 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:32:07.939815 disk-uuid[556]: Primary Header is updated.
Feb 13 19:32:07.939815 disk-uuid[556]: Secondary Entries is updated.
Feb 13 19:32:07.939815 disk-uuid[556]: Secondary Header is updated.
Feb 13 19:32:07.945543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:32:07.945591 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:32:08.151949 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:32:08.152046 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:32:08.152062 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:32:08.152094 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:32:08.153530 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:32:08.154528 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:32:08.154547 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:32:08.155537 kernel: ata3.00: applying bridge limits
Feb 13 19:32:08.156547 kernel: ata3.00: configured for UDMA/100
Feb 13 19:32:08.156574 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:32:08.206545 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:32:08.219484 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:32:08.219528 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:32:08.953327 disk-uuid[564]: The operation has completed successfully.
Feb 13 19:32:08.954604 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:32:08.985172 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:32:08.985341 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:32:09.014766 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:32:09.020455 sh[592]: Success
Feb 13 19:32:09.034537 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:32:09.070458 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:32:09.084325 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:32:09.087158 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:32:09.099246 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9
Feb 13 19:32:09.099294 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:32:09.099305 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:32:09.100269 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:32:09.101647 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:32:09.105955 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:32:09.106813 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:32:09.114647 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:32:09.116453 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:32:09.126971 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:32:09.127020 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:32:09.127037 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:32:09.129541 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:32:09.139551 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:32:09.142530 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:32:09.152159 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:32:09.160704 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:32:09.218253 ignition[692]: Ignition 2.20.0
Feb 13 19:32:09.218263 ignition[692]: Stage: fetch-offline
Feb 13 19:32:09.218303 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:09.218312 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:32:09.218393 ignition[692]: parsed url from cmdline: ""
Feb 13 19:32:09.218398 ignition[692]: no config URL provided
Feb 13 19:32:09.218402 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:32:09.218411 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:32:09.218439 ignition[692]: op(1): [started] loading QEMU firmware config module
Feb 13 19:32:09.218445 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:32:09.225788 ignition[692]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:32:09.240447 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:32:09.254743 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:32:09.269638 ignition[692]: parsing config with SHA512: f96871609ae95bfe9ee8216d440ea61121bf25340506e9c0f18f2c496463a85c624affc9b7f1a698e70f38f2336e03afacdbb19cbf6bd420841f638e93f5d333
Feb 13 19:32:09.273577 unknown[692]: fetched base config from "system"
Feb 13 19:32:09.273591 unknown[692]: fetched user config from "qemu"
Feb 13 19:32:09.275296 ignition[692]: fetch-offline: fetch-offline passed
Feb 13 19:32:09.275447 ignition[692]: Ignition finished successfully
Feb 13 19:32:09.279704 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:32:09.282616 systemd-networkd[781]: lo: Link UP
Feb 13 19:32:09.282626 systemd-networkd[781]: lo: Gained carrier
Feb 13 19:32:09.284199 systemd-networkd[781]: Enumeration completed
Feb 13 19:32:09.284316 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:32:09.284632 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:32:09.284636 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:32:09.286027 systemd-networkd[781]: eth0: Link UP
Feb 13 19:32:09.286031 systemd-networkd[781]: eth0: Gained carrier
Feb 13 19:32:09.286039 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:32:09.286714 systemd[1]: Reached target network.target - Network.
Feb 13 19:32:09.288596 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:32:09.299656 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:32:09.301554 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:32:09.312355 ignition[784]: Ignition 2.20.0
Feb 13 19:32:09.312367 ignition[784]: Stage: kargs
Feb 13 19:32:09.312545 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:09.312556 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:32:09.313345 ignition[784]: kargs: kargs passed
Feb 13 19:32:09.313389 ignition[784]: Ignition finished successfully
Feb 13 19:32:09.316886 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:32:09.328671 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:32:09.341352 ignition[793]: Ignition 2.20.0
Feb 13 19:32:09.341362 ignition[793]: Stage: disks
Feb 13 19:32:09.341544 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:09.341555 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:32:09.342342 ignition[793]: disks: disks passed
Feb 13 19:32:09.344321 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:32:09.342388 ignition[793]: Ignition finished successfully
Feb 13 19:32:09.346791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:32:09.348790 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:32:09.350939 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:32:09.353058 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:32:09.355301 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:32:09.367746 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:32:09.381884 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.22
Feb 13 19:32:09.381899 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Feb 13 19:32:09.385825 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:32:09.392966 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:32:09.405736 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:32:09.494548 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none.
Feb 13 19:32:09.495590 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:32:09.497335 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:32:09.515658 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:32:09.517938 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:32:09.519281 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:32:09.519328 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:32:09.531866 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811)
Feb 13 19:32:09.531897 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:32:09.531921 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:32:09.531936 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:32:09.531950 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:32:09.519350 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:32:09.527300 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:32:09.532991 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:32:09.536350 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:32:09.573251 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:32:09.578734 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:32:09.582622 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:32:09.587701 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:32:09.682840 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:32:09.693667 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:32:09.695222 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:32:09.706525 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:32:09.720441 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:32:09.727027 ignition[925]: INFO : Ignition 2.20.0
Feb 13 19:32:09.727027 ignition[925]: INFO : Stage: mount
Feb 13 19:32:09.728751 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:09.728751 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:32:09.728751 ignition[925]: INFO : mount: mount passed
Feb 13 19:32:09.728751 ignition[925]: INFO : Ignition finished successfully
Feb 13 19:32:09.734406 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:32:09.746612 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:32:10.098860 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:32:10.110672 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:32:10.117532 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Feb 13 19:32:10.119804 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:32:10.119836 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:32:10.119851 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:32:10.123537 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:32:10.124767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:32:10.150386 ignition[956]: INFO : Ignition 2.20.0
Feb 13 19:32:10.150386 ignition[956]: INFO : Stage: files
Feb 13 19:32:10.152394 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:10.152394 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:32:10.152394 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:32:10.156204 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:32:10.156204 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:32:10.156204 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:32:10.160596 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:32:10.160596 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:32:10.160596 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:32:10.160596 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:32:10.156903 unknown[956]: wrote ssh authorized keys file for user: core
Feb 13 19:32:10.202960 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:32:10.419141 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:32:10.419141 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:32:10.423394 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 19:32:10.926841 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:32:10.994526 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:32:10.996619 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 19:32:11.047800 systemd-networkd[781]: eth0: Gained IPv6LL
Feb 13 19:32:11.294868 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:32:11.599848 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:32:11.599848 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:32:11.604155 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:32:11.604155 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:32:11.604155 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:32:11.604155 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 19:32:11.604155 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:32:11.604155 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:32:11.604155 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 19:32:11.604155 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:32:11.625969 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:32:11.631186 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:32:11.632857 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:32:11.632857 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:32:11.632857 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:32:11.632857 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:32:11.632857 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:32:11.632857 ignition[956]: INFO : files: files passed
Feb 13 19:32:11.632857 ignition[956]: INFO : Ignition finished successfully
Feb 13 19:32:11.634459 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:32:11.649645 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:32:11.651669 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:32:11.654214 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:32:11.654363 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:32:11.662857 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:32:11.665779 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:32:11.667460 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:32:11.670301 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:32:11.668659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:32:11.670477 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:32:11.677665 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:32:11.701936 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:32:11.702070 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:32:11.704648 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:32:11.706817 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:32:11.707274 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:32:11.708052 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:32:11.725127 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:32:11.736644 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:32:11.748583 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:32:11.749098 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:32:11.751395 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:32:11.751758 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:32:11.751857 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:32:11.757496 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:32:11.758118 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:32:11.761193 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:32:11.761563 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:32:11.765265 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:32:11.765815 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:32:11.769906 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:32:11.771850 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:32:11.772213 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:32:11.772548 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:32:11.777870 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:32:11.777977 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:32:11.779744 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:32:11.782068 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:32:11.784011 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:32:11.786773 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:32:11.788051 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:32:11.788169 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:32:11.790774 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:32:11.790878 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:32:11.792532 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:32:11.794314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:32:11.796558 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:32:11.797062 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:32:11.797372 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:32:11.797902 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:32:11.798003 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:32:11.798419 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:32:11.798524 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:32:11.798923 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:32:11.799051 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:32:11.799421 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:32:11.799536 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:32:11.809659 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:32:11.811072 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:32:11.811202 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:32:11.824868 ignition[1011]: INFO : Ignition 2.20.0
Feb 13 19:32:11.824868 ignition[1011]: INFO : Stage: umount
Feb 13 19:32:11.824868 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:11.824868 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:32:11.814016 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:32:11.833073 ignition[1011]: INFO : umount: umount passed
Feb 13 19:32:11.833073 ignition[1011]: INFO : Ignition finished successfully
Feb 13 19:32:11.815018 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:32:11.815270 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:32:11.818920 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:32:11.819211 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:32:11.826332 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:32:11.826454 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:32:11.828259 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:32:11.828368 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:32:11.832044 systemd[1]: Stopped target network.target - Network.
Feb 13 19:32:11.833087 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:32:11.833145 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:32:11.834913 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:32:11.834959 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:32:11.836796 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:32:11.836843 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:32:11.838850 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:32:11.838899 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:32:11.841234 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:32:11.843311 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:32:11.846763 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:32:11.850119 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:32:11.850272 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:32:11.850540 systemd-networkd[781]: eth0: DHCPv6 lease lost
Feb 13 19:32:11.853562 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:32:11.853704 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:32:11.856430 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:32:11.856531 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:32:11.869695 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:32:11.870837 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:32:11.870928 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:32:11.873265 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:32:11.873330 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:32:11.875333 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:32:11.875398 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:32:11.877988 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:32:11.878066 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:32:11.880705 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:32:11.891194 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:32:11.891362 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:32:11.906538 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:32:11.906777 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:32:11.909379 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:32:11.909452 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:32:11.911467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:32:11.911546 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:32:11.913554 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:32:11.913626 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:32:11.915966 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:32:11.916047 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:32:11.917909 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:32:11.917971 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:32:11.932705 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:32:11.933913 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:32:11.933994 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:32:11.936582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:32:11.936646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:11.941435 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:32:11.941597 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:32:12.094869 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:32:12.095066 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:32:12.097748 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:32:12.099009 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:32:12.099104 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:32:12.108766 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:32:12.118515 systemd[1]: Switching root.
Feb 13 19:32:12.158624 systemd-journald[192]: Journal stopped
Feb 13 19:32:13.369780 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:32:13.369846 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:32:13.369864 kernel: SELinux: policy capability open_perms=1
Feb 13 19:32:13.369878 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:32:13.369889 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:32:13.369901 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:32:13.369912 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:32:13.369927 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:32:13.369946 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:32:13.369957 kernel: audit: type=1403 audit(1739475132.630:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:32:13.369970 systemd[1]: Successfully loaded SELinux policy in 45.932ms.
Feb 13 19:32:13.369991 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.010ms.
Feb 13 19:32:13.370004 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:32:13.370025 systemd[1]: Detected virtualization kvm.
Feb 13 19:32:13.370038 systemd[1]: Detected architecture x86-64.
Feb 13 19:32:13.370050 systemd[1]: Detected first boot.
Feb 13 19:32:13.370065 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:32:13.370080 zram_generator::config[1055]: No configuration found.
Feb 13 19:32:13.370094 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:32:13.370113 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:32:13.370127 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:32:13.370141 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:32:13.370154 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:32:13.370166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:32:13.370185 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:32:13.370197 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:32:13.370210 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:32:13.370222 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:32:13.370234 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:32:13.370246 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:32:13.370259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:32:13.370271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:32:13.370284 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:32:13.370298 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:32:13.370311 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:32:13.370323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:32:13.370335 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:32:13.370348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:32:13.370361 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:32:13.370374 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:32:13.370386 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:32:13.370400 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:32:13.370413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:32:13.370425 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:32:13.370437 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:32:13.370449 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:32:13.370462 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:32:13.370474 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:32:13.370487 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:32:13.370512 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:32:13.370527 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:32:13.370540 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:32:13.370552 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:32:13.370565 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:32:13.370577 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:32:13.370589 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:13.370601 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:32:13.370613 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:32:13.370628 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:32:13.370647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:32:13.370659 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:32:13.370672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:32:13.370684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:32:13.370697 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:32:13.370709 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:32:13.370721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:32:13.370733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:32:13.370751 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:32:13.370767 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:32:13.370783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:32:13.370797 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:32:13.370813 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:32:13.370828 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:32:13.370840 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:32:13.370852 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:32:13.370867 kernel: fuse: init (API version 7.39)
Feb 13 19:32:13.370878 kernel: loop: module loaded
Feb 13 19:32:13.370890 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:32:13.370902 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:32:13.370915 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:32:13.370929 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:32:13.370941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:32:13.370976 systemd-journald[1129]: Collecting audit messages is disabled.
Feb 13 19:32:13.371001 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:32:13.371014 systemd[1]: Stopped verity-setup.service.
Feb 13 19:32:13.371037 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:13.371050 systemd-journald[1129]: Journal started
Feb 13 19:32:13.371072 systemd-journald[1129]: Runtime Journal (/run/log/journal/854922707fee4a69b608e344340c57e6) is 6.0M, max 48.3M, 42.3M free.
Feb 13 19:32:13.154035 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:32:13.174558 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:32:13.175032 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:32:13.373586 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:32:13.376535 kernel: ACPI: bus type drm_connector registered
Feb 13 19:32:13.376969 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:32:13.378319 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:32:13.379606 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:32:13.380765 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:32:13.382006 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:32:13.383203 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:32:13.384470 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:32:13.385900 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:32:13.387412 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:32:13.387655 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:32:13.389130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:32:13.389298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:32:13.390710 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:32:13.390882 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:32:13.392243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:32:13.392412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:32:13.393899 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:32:13.394075 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:32:13.395467 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:32:13.395654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:32:13.397009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:32:13.398402 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:32:13.399914 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:32:13.413081 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:32:13.428626 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:32:13.431058 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:32:13.432390 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:32:13.432479 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:32:13.434615 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:32:13.437141 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:32:13.439481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:32:13.440713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:32:13.443495 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:32:13.446953 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:32:13.448293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:32:13.450320 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:32:13.451748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:32:13.461421 systemd-journald[1129]: Time spent on flushing to /var/log/journal/854922707fee4a69b608e344340c57e6 is 16.305ms for 952 entries.
Feb 13 19:32:13.461421 systemd-journald[1129]: System Journal (/var/log/journal/854922707fee4a69b608e344340c57e6) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:32:13.484619 systemd-journald[1129]: Received client request to flush runtime journal.
Feb 13 19:32:13.457744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:32:13.473277 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:32:13.476694 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:32:13.483268 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:32:13.486374 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:32:13.488564 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:32:13.491099 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:32:13.493379 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:32:13.501526 kernel: loop0: detected capacity change from 0 to 138184
Feb 13 19:32:13.510821 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:32:13.512681 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:32:13.514752 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:32:13.518076 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:32:13.530343 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:32:13.531663 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:32:13.535249 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:32:13.543009 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:32:13.552696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:32:13.556224 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:32:13.556923 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:32:13.560532 kernel: loop1: detected capacity change from 0 to 141000
Feb 13 19:32:13.576768 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 19:32:13.576788 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 19:32:13.584112 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:32:13.597531 kernel: loop2: detected capacity change from 0 to 210664
Feb 13 19:32:13.640528 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 19:32:13.653544 kernel: loop4: detected capacity change from 0 to 141000
Feb 13 19:32:13.667538 kernel: loop5: detected capacity change from 0 to 210664
Feb 13 19:32:13.675443 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:32:13.676056 (sd-merge)[1195]: Merged extensions into '/usr'.
Feb 13 19:32:13.681441 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:32:13.681557 systemd[1]: Reloading...
Feb 13 19:32:13.740532 zram_generator::config[1218]: No configuration found.
Feb 13 19:32:13.778515 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:32:13.865897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:32:13.915029 systemd[1]: Reloading finished in 232 ms.
Feb 13 19:32:13.951509 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:32:13.953161 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:32:13.967691 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:32:13.970048 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:32:13.975639 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:32:13.975653 systemd[1]: Reloading...
Feb 13 19:32:13.994833 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:32:13.995135 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:32:13.996116 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:32:13.996452 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Feb 13 19:32:13.996577 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Feb 13 19:32:14.029670 zram_generator::config[1288]: No configuration found.
Feb 13 19:32:14.029460 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:32:14.029867 systemd-tmpfiles[1259]: Skipping /boot
Feb 13 19:32:14.042310 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:32:14.042436 systemd-tmpfiles[1259]: Skipping /boot
Feb 13 19:32:14.139960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:32:14.189307 systemd[1]: Reloading finished in 213 ms.
Feb 13 19:32:14.213865 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:32:14.229167 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:32:14.238374 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:32:14.240691 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:32:14.243066 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:32:14.247695 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:32:14.251584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:32:14.254127 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:32:14.258756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:14.258924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:32:14.260102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:32:14.262359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:32:14.266752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:32:14.268025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:32:14.271623 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:32:14.272706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:14.280726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:32:14.281384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:32:14.284824 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:32:14.287071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:32:14.287392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:32:14.289093 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:32:14.289268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:32:14.289934 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Feb 13 19:32:14.296211 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:32:14.301772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:14.301972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:32:14.310113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:32:14.313654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:32:14.316731 augenrules[1359]: No rules
Feb 13 19:32:14.317106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:32:14.318386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:32:14.321071 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:32:14.322416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:14.323395 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:32:14.327568 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:32:14.329815 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:32:14.330065 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:32:14.331801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:32:14.332198 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:32:14.336488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:32:14.336751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:32:14.339097 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:32:14.339323 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:32:14.347136 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:32:14.358648 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:14.367887 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:32:14.370753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:32:14.372877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:32:14.376739 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:32:14.381841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:32:14.384291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:32:14.385572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:32:14.391751 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:32:14.393653 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:14.394828 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:32:14.402076 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:32:14.402661 augenrules[1394]: /sbin/augenrules: No change
Feb 13 19:32:14.408569 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:32:14.409848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:32:14.410034 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:32:14.414984 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:32:14.417144 augenrules[1424]: No rules
Feb 13 19:32:14.424765 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:32:14.426247 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:32:14.426544 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:32:14.432192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:32:14.432394 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:32:14.434726 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:32:14.434944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:32:14.436356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:32:14.436572 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:32:14.439989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:32:14.440108 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:32:14.446783 systemd-resolved[1327]: Positive Trust Anchors:
Feb 13 19:32:14.446804 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:32:14.446844 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:32:14.450987 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Feb 13 19:32:14.453114 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:32:14.455082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:32:14.468470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1393)
Feb 13 19:32:14.502535 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 19:32:14.512985 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:32:14.514450 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:32:14.515596 systemd-networkd[1407]: lo: Link UP
Feb 13 19:32:14.515605 systemd-networkd[1407]: lo: Gained carrier
Feb 13 19:32:14.517371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:32:14.519268 systemd-networkd[1407]: Enumeration completed
Feb 13 19:32:14.519589 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:32:14.519832 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:32:14.519844 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:32:14.521736 systemd-networkd[1407]: eth0: Link UP
Feb 13 19:32:14.521748 systemd-networkd[1407]: eth0: Gained carrier
Feb 13 19:32:14.521761 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:32:14.525781 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:32:14.527181 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:32:14.530814 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:32:14.531305 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:32:14.531562 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:32:14.530828 systemd[1]: Reached target network.target - Network.
Feb 13 19:32:14.535570 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:32:14.536660 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:32:14.537273 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection.
Feb 13 19:32:14.539024 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:32:14.539079 systemd-timesyncd[1430]: Initial clock synchronization to Thu 2025-02-13 19:32:14.453618 UTC.
Feb 13 19:32:14.546537 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 19:32:14.549461 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:32:14.576552 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:32:14.587831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:32:14.654656 kernel: kvm_amd: TSC scaling supported
Feb 13 19:32:14.654754 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:32:14.654768 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:32:14.655797 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:32:14.655860 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:32:14.656914 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:32:14.676532 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:32:14.715370 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:32:14.722270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:14.734699 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:32:14.743789 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:32:14.784319 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:32:14.786012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:32:14.787182 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:32:14.788431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:32:14.789740 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:32:14.791272 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:32:14.792563 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:32:14.793865 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:32:14.795154 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:32:14.795199 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:32:14.796162 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:32:14.798318 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:32:14.801786 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:32:14.812589 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:32:14.815236 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:32:14.816910 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:32:14.818162 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:32:14.819138 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:32:14.820154 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:32:14.820182 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:32:14.821203 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:32:14.823377 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:32:14.827633 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:32:14.828053 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:32:14.830865 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:32:14.832214 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:32:14.835164 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:32:14.838475 jq[1462]: false
Feb 13 19:32:14.838745 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:32:14.844732 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:32:14.848849 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:32:14.853014 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:32:14.854658 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:32:14.855200 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:32:14.858697 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found loop3
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found loop4
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found loop5
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found sr0
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found vda
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found vda1
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found vda2
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found vda3
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found usr
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found vda4
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found vda6
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found vda7
Feb 13 19:32:14.868182 extend-filesystems[1463]: Found vda9
Feb 13 19:32:14.868182 extend-filesystems[1463]: Checking size of /dev/vda9
Feb 13 19:32:14.864148 dbus-daemon[1461]: [system] SELinux support is enabled
Feb 13 19:32:14.863970 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:32:14.866294 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:32:14.891713 update_engine[1472]: I20250213 19:32:14.887821 1472 main.cc:92] Flatcar Update Engine starting
Feb 13 19:32:14.871928 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:32:14.879238 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:32:14.892245 jq[1477]: true
Feb 13 19:32:14.879447 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:32:14.896659 update_engine[1472]: I20250213 19:32:14.895711 1472 update_check_scheduler.cc:74] Next update check in 5m12s
Feb 13 19:32:14.896700 extend-filesystems[1463]: Resized partition /dev/vda9
Feb 13 19:32:14.881248 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:32:14.900831 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:32:14.881450 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:32:14.890892 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:32:14.905519 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:32:14.891149 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:32:14.918033 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1377)
Feb 13 19:32:14.920889 systemd-logind[1471]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:32:14.922639 jq[1486]: true
Feb 13 19:32:14.922812 tar[1485]: linux-amd64/helm
Feb 13 19:32:14.920917 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:32:14.921252 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:32:14.921275 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:32:14.921951 (ntainerd)[1495]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:32:14.924299 systemd-logind[1471]: New seat seat0.
Feb 13 19:32:14.936943 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:32:14.933625 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:32:14.933655 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:32:14.937012 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:32:14.957695 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:32:14.960269 extend-filesystems[1487]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:32:14.960269 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:32:14.960269 extend-filesystems[1487]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:32:14.966042 extend-filesystems[1463]: Resized filesystem in /dev/vda9 Feb 13 19:32:14.970009 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:32:14.971956 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:32:14.975478 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:32:14.976617 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:32:14.999449 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:32:15.008760 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:32:15.015328 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:32:15.017625 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:32:15.017872 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:32:15.023358 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:32:15.025803 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:32:15.027650 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:32:15.032709 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:32:15.037950 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:32:15.047011 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:32:15.050120 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:32:15.052014 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:32:15.139181 containerd[1495]: time="2025-02-13T19:32:15.139088909Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:32:15.163288 containerd[1495]: time="2025-02-13T19:32:15.163240008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:15.165526 containerd[1495]: time="2025-02-13T19:32:15.165460887Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:15.165526 containerd[1495]: time="2025-02-13T19:32:15.165521929Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 19:32:15.165616 containerd[1495]: time="2025-02-13T19:32:15.165554058Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:32:15.165832 containerd[1495]: time="2025-02-13T19:32:15.165751760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:32:15.165832 containerd[1495]: time="2025-02-13T19:32:15.165773804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:15.165894 containerd[1495]: time="2025-02-13T19:32:15.165845159Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:15.165894 containerd[1495]: time="2025-02-13T19:32:15.165858055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:15.166126 containerd[1495]: time="2025-02-13T19:32:15.166053485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:15.166126 containerd[1495]: time="2025-02-13T19:32:15.166074023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:15.166126 containerd[1495]: time="2025-02-13T19:32:15.166088036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:15.166126 containerd[1495]: time="2025-02-13T19:32:15.166105596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:15.166236 containerd[1495]: time="2025-02-13T19:32:15.166203500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:15.166460 containerd[1495]: time="2025-02-13T19:32:15.166438901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:15.166655 containerd[1495]: time="2025-02-13T19:32:15.166619791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:15.166655 containerd[1495]: time="2025-02-13T19:32:15.166638148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:32:15.166754 containerd[1495]: time="2025-02-13T19:32:15.166735544Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:32:15.166813 containerd[1495]: time="2025-02-13T19:32:15.166796565Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:32:15.171433 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:32:15.172773 containerd[1495]: time="2025-02-13T19:32:15.172745342Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Feb 13 19:32:15.172876 containerd[1495]: time="2025-02-13T19:32:15.172808237Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:32:15.172876 containerd[1495]: time="2025-02-13T19:32:15.172831746Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:32:15.172876 containerd[1495]: time="2025-02-13T19:32:15.172847621Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:32:15.172876 containerd[1495]: time="2025-02-13T19:32:15.172862939Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:32:15.173015 containerd[1495]: time="2025-02-13T19:32:15.172995294Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:32:15.173273 containerd[1495]: time="2025-02-13T19:32:15.173246213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173353196Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173373536Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173388286Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173402547Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173416290Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173428099Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173442171Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173455186Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173468043Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173479981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:32:15.173494 containerd[1495]: time="2025-02-13T19:32:15.173491332Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173527209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173541272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173553140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173566804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173578553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173591111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173601534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173613782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173626379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173639663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173652479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173663163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173674105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173687280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:32:15.173769 containerd[1495]: time="2025-02-13T19:32:15.173704691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173716669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173726804Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173775627Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173789957Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173800033Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173812570Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173821569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173833559Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173843364Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:32:15.174129 containerd[1495]: time="2025-02-13T19:32:15.173853490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:32:15.174369 containerd[1495]: time="2025-02-13T19:32:15.174132652Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:32:15.174369 containerd[1495]: time="2025-02-13T19:32:15.174172923Z" level=info msg="Connect containerd service" Feb 13 19:32:15.174369 containerd[1495]: time="2025-02-13T19:32:15.174253657Z" level=info msg="using 
legacy CRI server" Feb 13 19:32:15.174369 containerd[1495]: time="2025-02-13T19:32:15.174297556Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:32:15.174678 containerd[1495]: time="2025-02-13T19:32:15.174455116Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:32:15.177338 containerd[1495]: time="2025-02-13T19:32:15.176688950Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:32:15.177338 containerd[1495]: time="2025-02-13T19:32:15.176848841Z" level=info msg="Start subscribing containerd event" Feb 13 19:32:15.177338 containerd[1495]: time="2025-02-13T19:32:15.176899259Z" level=info msg="Start recovering state" Feb 13 19:32:15.177338 containerd[1495]: time="2025-02-13T19:32:15.176985503Z" level=info msg="Start event monitor" Feb 13 19:32:15.177338 containerd[1495]: time="2025-02-13T19:32:15.177004228Z" level=info msg="Start snapshots syncer" Feb 13 19:32:15.177338 containerd[1495]: time="2025-02-13T19:32:15.177018968Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:32:15.177338 containerd[1495]: time="2025-02-13T19:32:15.177031226Z" level=info msg="Start streaming server" Feb 13 19:32:15.177551 containerd[1495]: time="2025-02-13T19:32:15.177405950Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:32:15.177551 containerd[1495]: time="2025-02-13T19:32:15.177462456Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:32:15.178235 containerd[1495]: time="2025-02-13T19:32:15.178215802Z" level=info msg="containerd successfully booted in 0.040165s" Feb 13 19:32:15.180124 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:36766.service - OpenSSH per-connection server daemon (10.0.0.1:36766). Feb 13 19:32:15.182055 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:32:15.224251 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 36766 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:15.226379 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:15.235202 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:32:15.247906 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:32:15.251619 systemd-logind[1471]: New session 1 of user core. Feb 13 19:32:15.260733 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:32:15.273888 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:32:15.278639 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:32:15.333992 tar[1485]: linux-amd64/LICENSE Feb 13 19:32:15.333992 tar[1485]: linux-amd64/README.md Feb 13 19:32:15.347067 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:32:15.390527 systemd[1554]: Queued start job for default target default.target. Feb 13 19:32:15.400865 systemd[1554]: Created slice app.slice - User Application Slice. Feb 13 19:32:15.400891 systemd[1554]: Reached target paths.target - Paths. Feb 13 19:32:15.400905 systemd[1554]: Reached target timers.target - Timers. 
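
[Editor's note] The containerd startup above loads its snapshotter plugins and deliberately skips the ones whose prerequisites are missing (no aufs module, /var/lib/containerd is ext4 rather than btrfs or zfs, devmapper unconfigured), leaving overlayfs as the effective default; the CNI error is likewise expected this early, before any network config lands in /etc/cni/net.d. A quick way to see the same plugin outcome after boot, assuming only the default socket path:

    # List plugin load status; skipped snapshotters appear alongside the active overlayfs one.
    ctr plugins ls | grep snapshotter

    # Confirm the daemon is answering on its socket:
    ctr --address /run/containerd/containerd.sock version
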
Feb 13 19:32:15.402557 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:32:15.415566 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:32:15.415782 systemd[1554]: Reached target sockets.target - Sockets. Feb 13 19:32:15.415811 systemd[1554]: Reached target basic.target - Basic System. Feb 13 19:32:15.415873 systemd[1554]: Reached target default.target - Main User Target. Feb 13 19:32:15.415923 systemd[1554]: Startup finished in 129ms. Feb 13 19:32:15.416255 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:32:15.419081 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:32:15.487855 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:51078.service - OpenSSH per-connection server daemon (10.0.0.1:51078). Feb 13 19:32:15.524816 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 51078 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:15.526432 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:15.531326 systemd-logind[1471]: New session 2 of user core. Feb 13 19:32:15.544794 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:32:15.599557 sshd[1570]: Connection closed by 10.0.0.1 port 51078 Feb 13 19:32:15.599925 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:15.615205 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:51078.service: Deactivated successfully. Feb 13 19:32:15.617171 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:32:15.618760 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:32:15.620416 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:51090.service - OpenSSH per-connection server daemon (10.0.0.1:51090). Feb 13 19:32:15.622945 systemd-logind[1471]: Removed session 2. Feb 13 19:32:15.658013 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 51090 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:15.659932 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:15.664215 systemd-logind[1471]: New session 3 of user core. Feb 13 19:32:15.675822 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:32:15.731026 sshd[1577]: Connection closed by 10.0.0.1 port 51090 Feb 13 19:32:15.731532 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:15.735698 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:51090.service: Deactivated successfully. Feb 13 19:32:15.738152 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:32:15.738868 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:32:15.739745 systemd-logind[1471]: Removed session 3. Feb 13 19:32:16.039720 systemd-networkd[1407]: eth0: Gained IPv6LL Feb 13 19:32:16.043001 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:32:16.045075 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:32:16.054785 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:32:16.057671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:16.060080 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:32:16.082265 systemd[1]: coreos-metadata.service: Deactivated successfully. 
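
[Editor's note] eth0 gaining its IPv6 link-local address is what lets systemd-networkd-wait-online finish and network-online.target be reached, which in turn releases the coreos-metadata, nvidia and kubelet jobs queued behind it. A short inspection sketch, assuming stock systemd-networkd tooling:

    networkctl status eth0                      # addresses, carrier, DHCP state
    systemctl is-active network-online.target   # "active" once wait-online finishes
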
Feb 13 19:32:16.082572 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:32:16.084862 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:32:16.087441 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:32:16.714584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:16.716349 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:32:16.718550 systemd[1]: Startup finished in 725ms (kernel) + 5.926s (initrd) + 4.131s (userspace) = 10.784s. Feb 13 19:32:16.727197 agetty[1543]: failed to open credentials directory Feb 13 19:32:16.732274 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:32:16.742743 agetty[1539]: failed to open credentials directory Feb 13 19:32:17.165355 kubelet[1603]: E0213 19:32:17.165218 1603 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:32:17.169810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:32:17.170018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:32:25.707372 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:33480.service - OpenSSH per-connection server daemon (10.0.0.1:33480). Feb 13 19:32:25.743105 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 33480 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:25.744566 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:25.748540 systemd-logind[1471]: New session 4 of user core. Feb 13 19:32:25.759617 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:32:25.814888 sshd[1620]: Connection closed by 10.0.0.1 port 33480 Feb 13 19:32:25.815298 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:25.824761 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:33480.service: Deactivated successfully. Feb 13 19:32:25.827111 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:32:25.828969 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:32:25.846857 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:33494.service - OpenSSH per-connection server daemon (10.0.0.1:33494). Feb 13 19:32:25.847808 systemd-logind[1471]: Removed session 4. Feb 13 19:32:25.878513 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 33494 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:25.879843 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:25.883767 systemd-logind[1471]: New session 5 of user core. Feb 13 19:32:25.894605 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:32:25.943095 sshd[1627]: Connection closed by 10.0.0.1 port 33494 Feb 13 19:32:25.943447 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:25.957492 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:33494.service: Deactivated successfully. Feb 13 19:32:25.959651 systemd[1]: session-5.scope: Deactivated successfully. 
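
[Editor's note] The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written by kubeadm init / kubeadm join, so repeated failures before those commands run are expected and harmless. For orientation only, an illustrative minimal KubeletConfiguration — every field below is an assumption for the sketch, not taken from this host:

    # Hypothetical hand-written config; kubeadm normally generates the real one.
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
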
Feb 13 19:32:25.961192 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:32:25.978814 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:33500.service - OpenSSH per-connection server daemon (10.0.0.1:33500). Feb 13 19:32:25.979910 systemd-logind[1471]: Removed session 5. Feb 13 19:32:26.014515 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 33500 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:26.016199 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:26.020596 systemd-logind[1471]: New session 6 of user core. Feb 13 19:32:26.037605 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:32:26.092607 sshd[1634]: Connection closed by 10.0.0.1 port 33500 Feb 13 19:32:26.092982 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:26.108660 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:33500.service: Deactivated successfully. Feb 13 19:32:26.110612 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:32:26.112435 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:32:26.121781 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:33510.service - OpenSSH per-connection server daemon (10.0.0.1:33510). Feb 13 19:32:26.122840 systemd-logind[1471]: Removed session 6. Feb 13 19:32:26.155761 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 33510 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:26.157560 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:26.161923 systemd-logind[1471]: New session 7 of user core. Feb 13 19:32:26.172657 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:32:26.230717 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:32:26.231052 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:32:26.251947 sudo[1642]: pam_unix(sudo:session): session closed for user root Feb 13 19:32:26.253612 sshd[1641]: Connection closed by 10.0.0.1 port 33510 Feb 13 19:32:26.254018 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:26.265195 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:33510.service: Deactivated successfully. Feb 13 19:32:26.266851 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:32:26.268596 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:32:26.282775 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:33522.service - OpenSSH per-connection server daemon (10.0.0.1:33522). Feb 13 19:32:26.283732 systemd-logind[1471]: Removed session 7. Feb 13 19:32:26.313799 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 33522 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:26.315168 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:26.318982 systemd-logind[1471]: New session 8 of user core. Feb 13 19:32:26.325621 systemd[1]: Started session-8.scope - Session 8 of User core. 
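
[Editor's note] The first sudo of the session flips SELinux to enforcing (setenforce 1). That only changes the running kernel state, not the boot-time default; where the persistent setting lives varies by distribution. A two-line check:

    getenforce            # Enforcing / Permissive / Disabled
    sudo setenforce 1     # runtime switch only; does not survive a reboot
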
Feb 13 19:32:26.380400 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:32:26.380793 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:32:26.384598 sudo[1651]: pam_unix(sudo:session): session closed for user root Feb 13 19:32:26.390905 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:32:26.391310 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:32:26.410837 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:32:26.441424 augenrules[1673]: No rules Feb 13 19:32:26.443433 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:32:26.443753 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:32:26.445027 sudo[1650]: pam_unix(sudo:session): session closed for user root Feb 13 19:32:26.446679 sshd[1649]: Connection closed by 10.0.0.1 port 33522 Feb 13 19:32:26.446991 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:26.457603 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:33522.service: Deactivated successfully. Feb 13 19:32:26.459410 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:32:26.461130 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:32:26.469764 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:33528.service - OpenSSH per-connection server daemon (10.0.0.1:33528). Feb 13 19:32:26.470671 systemd-logind[1471]: Removed session 8. Feb 13 19:32:26.502448 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 33528 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:32:26.504155 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:26.508010 systemd-logind[1471]: New session 9 of user core. Feb 13 19:32:26.517611 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:32:26.570336 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:32:26.570695 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:32:26.866720 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:32:26.866905 (dockerd)[1704]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:32:27.109229 dockerd[1704]: time="2025-02-13T19:32:27.109159960Z" level=info msg="Starting up" Feb 13 19:32:27.172383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:32:27.182689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:27.250012 dockerd[1704]: time="2025-02-13T19:32:27.249952584Z" level=info msg="Loading containers: start." Feb 13 19:32:27.408164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
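
[Editor's note] The audit-rules sequence above is the standard augenrules workflow: rule fragments are removed from /etc/audit/rules.d/, the service is restarted, and augenrules compiles whatever remains — here reporting "No rules". A minimal sketch of the same cycle; the fragment file name is hypothetical:

    # Drop-in fragments under /etc/audit/rules.d/ are concatenated by augenrules.
    echo '-w /etc/passwd -p wa -k passwd_changes' | sudo tee /etc/audit/rules.d/90-example.rules
    sudo augenrules --load   # rebuild and load the combined rule set
    sudo auditctl -l         # list what is actually loaded ("No rules" if none)
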
Feb 13 19:32:27.413913 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:32:27.460421 kubelet[1771]: E0213 19:32:27.460212 1771 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:32:27.467753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:32:27.467965 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:32:27.686542 kernel: Initializing XFRM netlink socket Feb 13 19:32:27.781248 systemd-networkd[1407]: docker0: Link UP Feb 13 19:32:27.832844 dockerd[1704]: time="2025-02-13T19:32:27.832762229Z" level=info msg="Loading containers: done." Feb 13 19:32:27.850018 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3245040228-merged.mount: Deactivated successfully. Feb 13 19:32:27.853347 dockerd[1704]: time="2025-02-13T19:32:27.853240063Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:32:27.853473 dockerd[1704]: time="2025-02-13T19:32:27.853428627Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:32:27.853640 dockerd[1704]: time="2025-02-13T19:32:27.853612014Z" level=info msg="Daemon has completed initialization" Feb 13 19:32:27.892792 dockerd[1704]: time="2025-02-13T19:32:27.892719936Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:32:27.892934 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:32:28.635013 containerd[1495]: time="2025-02-13T19:32:28.634977938Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:32:29.297556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486804657.mount: Deactivated successfully. 
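
[Editor's note] dockerd settles on the overlay2 storage driver but warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; that costs some image-build performance and is otherwise harmless. To confirm the driver in use (standard docker CLI; the kernel-config location varies by distro and /proc/config.gz is only present when the kernel was built with IKCONFIG):

    docker info --format '{{.Driver}}'                 # expect: overlay2
    zgrep OVERLAY_FS_REDIRECT_DIR /proc/config.gz 2>/dev/null
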
Feb 13 19:32:30.380712 containerd[1495]: time="2025-02-13T19:32:30.380658397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:30.381449 containerd[1495]: time="2025-02-13T19:32:30.381391468Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 19:32:30.382576 containerd[1495]: time="2025-02-13T19:32:30.382546174Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:30.385301 containerd[1495]: time="2025-02-13T19:32:30.385267901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:30.386336 containerd[1495]: time="2025-02-13T19:32:30.386308602Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.751295846s" Feb 13 19:32:30.386380 containerd[1495]: time="2025-02-13T19:32:30.386337429Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 19:32:30.406445 containerd[1495]: time="2025-02-13T19:32:30.406405188Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:32:32.520012 containerd[1495]: time="2025-02-13T19:32:32.519935140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:32.520973 containerd[1495]: time="2025-02-13T19:32:32.520908787Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 19:32:32.522153 containerd[1495]: time="2025-02-13T19:32:32.522115500Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:32.524835 containerd[1495]: time="2025-02-13T19:32:32.524793865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:32.525852 containerd[1495]: time="2025-02-13T19:32:32.525811496Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.119374531s" Feb 13 19:32:32.525852 containerd[1495]: time="2025-02-13T19:32:32.525837750Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
19:32:32.548593 containerd[1495]: time="2025-02-13T19:32:32.548550101Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:32:33.496053 containerd[1495]: time="2025-02-13T19:32:33.495989430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:33.496886 containerd[1495]: time="2025-02-13T19:32:33.496808051Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 19:32:33.497968 containerd[1495]: time="2025-02-13T19:32:33.497936090Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:33.500769 containerd[1495]: time="2025-02-13T19:32:33.500729597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:33.501641 containerd[1495]: time="2025-02-13T19:32:33.501616642Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 953.024836ms" Feb 13 19:32:33.501699 containerd[1495]: time="2025-02-13T19:32:33.501642509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 19:32:33.523280 containerd[1495]: time="2025-02-13T19:32:33.523224010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:32:34.556455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583812159.mount: Deactivated successfully. 
Feb 13 19:32:34.827420 containerd[1495]: time="2025-02-13T19:32:34.827282245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:34.828127 containerd[1495]: time="2025-02-13T19:32:34.828065399Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 19:32:34.829302 containerd[1495]: time="2025-02-13T19:32:34.829264018Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:34.831281 containerd[1495]: time="2025-02-13T19:32:34.831247000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:34.831853 containerd[1495]: time="2025-02-13T19:32:34.831817825Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.308545263s" Feb 13 19:32:34.831891 containerd[1495]: time="2025-02-13T19:32:34.831851667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:32:34.853037 containerd[1495]: time="2025-02-13T19:32:34.852972744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:32:35.417653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072006277.mount: Deactivated successfully. 
Feb 13 19:32:36.469069 containerd[1495]: time="2025-02-13T19:32:36.469013135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:36.470566 containerd[1495]: time="2025-02-13T19:32:36.470514856Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:32:36.471782 containerd[1495]: time="2025-02-13T19:32:36.471752529Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:36.474487 containerd[1495]: time="2025-02-13T19:32:36.474436324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:36.475731 containerd[1495]: time="2025-02-13T19:32:36.475679461Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.622671563s" Feb 13 19:32:36.475785 containerd[1495]: time="2025-02-13T19:32:36.475731588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:32:36.497945 containerd[1495]: time="2025-02-13T19:32:36.497894572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:32:37.006387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517899251.mount: Deactivated successfully. 
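
[Editor's note] The pulls in this stretch land in containerd's k8s.io namespace, so they do not show up under docker images. A small sketch for listing them and reproducing one of the pulls with plain ctr; the image names are the ones from the log:

    # CRI images live in the k8s.io namespace:
    ctr -n k8s.io images ls -q

    # Re-pull one of the images seen above:
    ctr -n k8s.io images pull registry.k8s.io/pause:3.9
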
Feb 13 19:32:37.012133 containerd[1495]: time="2025-02-13T19:32:37.012094666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:37.012897 containerd[1495]: time="2025-02-13T19:32:37.012851719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 19:32:37.014029 containerd[1495]: time="2025-02-13T19:32:37.013983786Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:37.016122 containerd[1495]: time="2025-02-13T19:32:37.016091535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:37.016930 containerd[1495]: time="2025-02-13T19:32:37.016891313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 518.963864ms" Feb 13 19:32:37.017013 containerd[1495]: time="2025-02-13T19:32:37.016928115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 19:32:37.037998 containerd[1495]: time="2025-02-13T19:32:37.037815067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:32:37.470825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:32:37.479736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:37.648628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:37.653289 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:32:37.746197 kubelet[2090]: E0213 19:32:37.746045 2090 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:32:37.751172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:32:37.751419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:32:37.822362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533180627.mount: Deactivated successfully. 
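
[Editor's note] This is the second identical kubelet failure, and systemd logs a scheduled restart with the per-unit restart counter now at 2. The counter and the Restart= policy driving the retries can be read directly:

    systemctl show kubelet -p NRestarts,Restart   # restart count and policy
    systemctl cat kubelet                         # unit file with Restart=/RestartSec=
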
Feb 13 19:32:39.548584 containerd[1495]: time="2025-02-13T19:32:39.548514593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:39.549517 containerd[1495]: time="2025-02-13T19:32:39.549420094Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 19:32:39.550473 containerd[1495]: time="2025-02-13T19:32:39.550440732Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:39.553371 containerd[1495]: time="2025-02-13T19:32:39.553330797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:32:39.554460 containerd[1495]: time="2025-02-13T19:32:39.554424419Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.516571498s" Feb 13 19:32:39.554515 containerd[1495]: time="2025-02-13T19:32:39.554461747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 19:32:42.376454 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:42.386810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:42.404596 systemd[1]: Reloading requested from client PID 2227 ('systemctl') (unit session-9.scope)... Feb 13 19:32:42.404613 systemd[1]: Reloading... Feb 13 19:32:42.500589 zram_generator::config[2269]: No configuration found. Feb 13 19:32:42.763348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:32:42.840945 systemd[1]: Reloading finished in 435 ms. Feb 13 19:32:42.910650 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:32:42.910747 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:32:42.911014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:42.914002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:43.069924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:43.075810 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:32:43.115765 kubelet[2315]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:32:43.115765 kubelet[2315]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
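
[Editor's note] Once /var/lib/kubelet/config.yaml exists (after the reload sequence above), the kubelet starts for real but warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flags that should migrate into that config file. A hedged before/after sketch; the value shown is illustrative, not copied from this host:

    # Deprecated flag form (illustrative):
    #   kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock ...
    # Config-file form, merged into /var/lib/kubelet/config.yaml instead:
    cat >>/var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF
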
Feb 13 19:32:43.115765 kubelet[2315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:32:43.116184 kubelet[2315]: I0213 19:32:43.115796 2315 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:32:43.432589 kubelet[2315]: I0213 19:32:43.432456 2315 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:32:43.432589 kubelet[2315]: I0213 19:32:43.432515 2315 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:32:43.432789 kubelet[2315]: I0213 19:32:43.432732 2315 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:32:43.449253 kubelet[2315]: I0213 19:32:43.449166 2315 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:32:43.449863 kubelet[2315]: E0213 19:32:43.449825 2315 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.463952 kubelet[2315]: I0213 19:32:43.463900 2315 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:32:43.465081 kubelet[2315]: I0213 19:32:43.465033 2315 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:32:43.465256 kubelet[2315]: I0213 19:32:43.465069 2315 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:32:43.465672 kubelet[2315]: I0213 19:32:43.465648 2315 topology_manager.go:138] "Creating topology manager with 
none policy" Feb 13 19:32:43.465672 kubelet[2315]: I0213 19:32:43.465664 2315 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:32:43.465835 kubelet[2315]: I0213 19:32:43.465811 2315 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:32:43.466466 kubelet[2315]: I0213 19:32:43.466443 2315 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:32:43.466466 kubelet[2315]: I0213 19:32:43.466459 2315 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:32:43.466556 kubelet[2315]: I0213 19:32:43.466481 2315 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:32:43.466556 kubelet[2315]: I0213 19:32:43.466511 2315 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:32:43.469876 kubelet[2315]: W0213 19:32:43.469773 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.469876 kubelet[2315]: E0213 19:32:43.469831 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.470036 kubelet[2315]: W0213 19:32:43.469933 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.470036 kubelet[2315]: E0213 19:32:43.470001 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.470998 kubelet[2315]: I0213 19:32:43.470954 2315 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:32:43.472275 kubelet[2315]: I0213 19:32:43.472239 2315 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:32:43.472451 kubelet[2315]: W0213 19:32:43.472303 2315 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:32:43.473092 kubelet[2315]: I0213 19:32:43.473063 2315 server.go:1264] "Started kubelet" Feb 13 19:32:43.476672 kubelet[2315]: I0213 19:32:43.476579 2315 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:32:43.476973 kubelet[2315]: I0213 19:32:43.476921 2315 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:32:43.477540 kubelet[2315]: I0213 19:32:43.477486 2315 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:32:43.478522 kubelet[2315]: I0213 19:32:43.478475 2315 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:32:43.479971 kubelet[2315]: E0213 19:32:43.479941 2315 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:32:43.480759 kubelet[2315]: I0213 19:32:43.480717 2315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:32:43.481785 kubelet[2315]: E0213 19:32:43.481665 2315 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db71ee6cbf0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:32:43.47303502 +0000 UTC m=+0.393253613,LastTimestamp:2025-02-13 19:32:43.47303502 +0000 UTC m=+0.393253613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:32:43.481905 kubelet[2315]: I0213 19:32:43.481838 2315 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:32:43.483520 kubelet[2315]: I0213 19:32:43.482184 2315 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:32:43.483520 kubelet[2315]: I0213 19:32:43.482278 2315 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:32:43.483520 kubelet[2315]: E0213 19:32:43.482743 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms" Feb 13 19:32:43.483520 kubelet[2315]: W0213 19:32:43.482871 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.483520 kubelet[2315]: E0213 19:32:43.482931 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.485186 kubelet[2315]: I0213 19:32:43.485147 2315 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:32:43.485186 kubelet[2315]: I0213 19:32:43.485174 2315 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:32:43.485366 kubelet[2315]: I0213 19:32:43.485280 2315 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:32:43.500233 kubelet[2315]: I0213 19:32:43.500173 2315 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:32:43.500233 kubelet[2315]: I0213 19:32:43.500199 2315 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:32:43.500233 kubelet[2315]: I0213 19:32:43.500221 2315 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:32:43.500409 kubelet[2315]: I0213 19:32:43.500270 2315 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 13 19:32:43.501626 kubelet[2315]: I0213 19:32:43.501582 2315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:32:43.501626 kubelet[2315]: I0213 19:32:43.501627 2315 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:32:43.501762 kubelet[2315]: I0213 19:32:43.501649 2315 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:32:43.501762 kubelet[2315]: E0213 19:32:43.501695 2315 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:32:43.502356 kubelet[2315]: W0213 19:32:43.502313 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.502403 kubelet[2315]: E0213 19:32:43.502363 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:43.583183 kubelet[2315]: I0213 19:32:43.583131 2315 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:32:43.583550 kubelet[2315]: E0213 19:32:43.583485 2315 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Feb 13 19:32:43.586041 kubelet[2315]: E0213 19:32:43.585940 2315 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db71ee6cbf0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:32:43.47303502 +0000 UTC m=+0.393253613,LastTimestamp:2025-02-13 19:32:43.47303502 +0000 UTC m=+0.393253613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:32:43.602282 kubelet[2315]: E0213 19:32:43.602195 2315 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:32:43.684247 kubelet[2315]: E0213 19:32:43.684108 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms" Feb 13 19:32:43.785559 kubelet[2315]: I0213 19:32:43.785481 2315 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:32:43.785986 kubelet[2315]: E0213 19:32:43.785940 2315 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Feb 13 19:32:43.803036 kubelet[2315]: E0213 19:32:43.802990 2315 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Feb 13 19:32:43.919332 kubelet[2315]: I0213 19:32:43.919256 2315 policy_none.go:49] "None policy: Start" Feb 13 19:32:43.920233 kubelet[2315]: I0213 19:32:43.920206 2315 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:32:43.920291 kubelet[2315]: I0213 19:32:43.920272 2315 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:32:43.927186 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:32:43.946358 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:32:43.950114 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:32:43.966792 kubelet[2315]: I0213 19:32:43.966730 2315 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:32:43.967073 kubelet[2315]: I0213 19:32:43.967009 2315 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:32:43.967136 kubelet[2315]: I0213 19:32:43.967116 2315 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:32:43.968336 kubelet[2315]: E0213 19:32:43.968294 2315 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:32:44.085007 kubelet[2315]: E0213 19:32:44.084939 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms" Feb 13 19:32:44.187628 kubelet[2315]: I0213 19:32:44.187590 2315 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:32:44.188097 kubelet[2315]: E0213 19:32:44.187959 2315 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Feb 13 19:32:44.204245 kubelet[2315]: I0213 19:32:44.204102 2315 topology_manager.go:215] "Topology Admit Handler" podUID="b7cb27d76dab88dc2bf4e2df211878ae" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:32:44.205587 kubelet[2315]: I0213 19:32:44.205552 2315 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:32:44.206732 kubelet[2315]: I0213 19:32:44.206372 2315 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:32:44.211783 systemd[1]: Created slice kubepods-burstable-podb7cb27d76dab88dc2bf4e2df211878ae.slice - libcontainer container kubepods-burstable-podb7cb27d76dab88dc2bf4e2df211878ae.slice. Feb 13 19:32:44.241977 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 19:32:44.246117 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
Feb 13 19:32:44.286205 kubelet[2315]: I0213 19:32:44.286127 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7cb27d76dab88dc2bf4e2df211878ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7cb27d76dab88dc2bf4e2df211878ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:32:44.286205 kubelet[2315]: I0213 19:32:44.286183 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7cb27d76dab88dc2bf4e2df211878ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7cb27d76dab88dc2bf4e2df211878ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:32:44.286205 kubelet[2315]: I0213 19:32:44.286203 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:44.286205 kubelet[2315]: I0213 19:32:44.286222 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:44.286483 kubelet[2315]: I0213 19:32:44.286240 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:44.286483 kubelet[2315]: I0213 19:32:44.286258 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7cb27d76dab88dc2bf4e2df211878ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7cb27d76dab88dc2bf4e2df211878ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:32:44.286483 kubelet[2315]: I0213 19:32:44.286274 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:44.286483 kubelet[2315]: I0213 19:32:44.286291 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:44.286483 kubelet[2315]: I0213 19:32:44.286307 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " 
pod="kube-system/kube-scheduler-localhost" Feb 13 19:32:44.320760 kubelet[2315]: W0213 19:32:44.320691 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:44.320760 kubelet[2315]: E0213 19:32:44.320761 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:44.495171 kubelet[2315]: W0213 19:32:44.495092 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:44.495171 kubelet[2315]: E0213 19:32:44.495157 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:44.538440 kubelet[2315]: E0213 19:32:44.538383 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:44.539072 containerd[1495]: time="2025-02-13T19:32:44.539031322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7cb27d76dab88dc2bf4e2df211878ae,Namespace:kube-system,Attempt:0,}" Feb 13 19:32:44.545363 kubelet[2315]: E0213 19:32:44.545330 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:44.545958 containerd[1495]: time="2025-02-13T19:32:44.545895224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:32:44.548154 kubelet[2315]: E0213 19:32:44.548131 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:44.548553 containerd[1495]: time="2025-02-13T19:32:44.548514116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:32:44.818919 kubelet[2315]: W0213 19:32:44.818754 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:44.818919 kubelet[2315]: E0213 19:32:44.818814 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:44.886537 kubelet[2315]: E0213 19:32:44.886446 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s" Feb 13 19:32:44.989966 kubelet[2315]: I0213 19:32:44.989930 2315 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:32:44.990331 kubelet[2315]: E0213 19:32:44.990288 2315 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Feb 13 19:32:45.006265 kubelet[2315]: W0213 19:32:45.003618 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:45.006265 kubelet[2315]: E0213 19:32:45.006263 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:45.611832 kubelet[2315]: E0213 19:32:45.611794 2315 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.22:6443: connect: connection refused Feb 13 19:32:45.654623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233678108.mount: Deactivated successfully. Feb 13 19:32:45.661008 containerd[1495]: time="2025-02-13T19:32:45.660960645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:32:45.663968 containerd[1495]: time="2025-02-13T19:32:45.663885451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:32:45.665227 containerd[1495]: time="2025-02-13T19:32:45.665180873Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:32:45.667415 containerd[1495]: time="2025-02-13T19:32:45.667383515Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:32:45.668888 containerd[1495]: time="2025-02-13T19:32:45.668848477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:32:45.670022 containerd[1495]: time="2025-02-13T19:32:45.669967302Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:32:45.670726 containerd[1495]: time="2025-02-13T19:32:45.670691278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:32:45.672980 containerd[1495]: time="2025-02-13T19:32:45.672957501Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:32:45.674385 containerd[1495]: time="2025-02-13T19:32:45.674349681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.135231457s" Feb 13 19:32:45.675406 containerd[1495]: time="2025-02-13T19:32:45.675325578Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.129302107s" Feb 13 19:32:45.679870 containerd[1495]: time="2025-02-13T19:32:45.679829892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.131255871s" Feb 13 19:32:45.943527 containerd[1495]: time="2025-02-13T19:32:45.943243549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:32:45.943527 containerd[1495]: time="2025-02-13T19:32:45.943339617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:32:45.943527 containerd[1495]: time="2025-02-13T19:32:45.943358908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:45.944071 containerd[1495]: time="2025-02-13T19:32:45.943451791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:45.945491 containerd[1495]: time="2025-02-13T19:32:45.942554412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:32:45.945491 containerd[1495]: time="2025-02-13T19:32:45.945452939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:32:45.945491 containerd[1495]: time="2025-02-13T19:32:45.945465013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:45.945738 containerd[1495]: time="2025-02-13T19:32:45.945701255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:45.951820 containerd[1495]: time="2025-02-13T19:32:45.951565193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:32:45.951820 containerd[1495]: time="2025-02-13T19:32:45.951622157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:32:45.951820 containerd[1495]: time="2025-02-13T19:32:45.951635552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:45.951820 containerd[1495]: time="2025-02-13T19:32:45.951720697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:45.978049 systemd[1]: Started cri-containerd-d958e4cecc4a82e75f35a01fdaefd48099dbc12f03d6654dfcebb09b04bcddbe.scope - libcontainer container d958e4cecc4a82e75f35a01fdaefd48099dbc12f03d6654dfcebb09b04bcddbe. Feb 13 19:32:45.981882 systemd[1]: Started cri-containerd-3e88a228788b51c5771658a2b7d72e893bb362363a895a6eaf474ddda8da5e9b.scope - libcontainer container 3e88a228788b51c5771658a2b7d72e893bb362363a895a6eaf474ddda8da5e9b. Feb 13 19:32:45.986186 systemd[1]: Started cri-containerd-2113ef817932120670b92cb9adfe20da4014c6361982229eaafca01516e5fad6.scope - libcontainer container 2113ef817932120670b92cb9adfe20da4014c6361982229eaafca01516e5fad6. Feb 13 19:32:46.034399 containerd[1495]: time="2025-02-13T19:32:46.034314212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"d958e4cecc4a82e75f35a01fdaefd48099dbc12f03d6654dfcebb09b04bcddbe\"" Feb 13 19:32:46.037751 kubelet[2315]: E0213 19:32:46.037105 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:46.042926 containerd[1495]: time="2025-02-13T19:32:46.042883611Z" level=info msg="CreateContainer within sandbox \"d958e4cecc4a82e75f35a01fdaefd48099dbc12f03d6654dfcebb09b04bcddbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:32:46.043254 containerd[1495]: time="2025-02-13T19:32:46.042904585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7cb27d76dab88dc2bf4e2df211878ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"2113ef817932120670b92cb9adfe20da4014c6361982229eaafca01516e5fad6\"" Feb 13 19:32:46.044520 kubelet[2315]: E0213 19:32:46.044451 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:46.047381 containerd[1495]: time="2025-02-13T19:32:46.047330634Z" level=info msg="CreateContainer within sandbox \"2113ef817932120670b92cb9adfe20da4014c6361982229eaafca01516e5fad6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:32:46.050290 containerd[1495]: time="2025-02-13T19:32:46.050247921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e88a228788b51c5771658a2b7d72e893bb362363a895a6eaf474ddda8da5e9b\"" Feb 13 19:32:46.050807 kubelet[2315]: E0213 19:32:46.050783 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:46.053062 containerd[1495]: time="2025-02-13T19:32:46.053034245Z" level=info msg="CreateContainer within sandbox \"3e88a228788b51c5771658a2b7d72e893bb362363a895a6eaf474ddda8da5e9b\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:32:46.067893 containerd[1495]: time="2025-02-13T19:32:46.067836810Z" level=info msg="CreateContainer within sandbox \"2113ef817932120670b92cb9adfe20da4014c6361982229eaafca01516e5fad6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"149bacca2699734a8eebb84accfac26856249f5b2810d42b70f9cc20553ae5ef\"" Feb 13 19:32:46.068698 containerd[1495]: time="2025-02-13T19:32:46.068615975Z" level=info msg="StartContainer for \"149bacca2699734a8eebb84accfac26856249f5b2810d42b70f9cc20553ae5ef\"" Feb 13 19:32:46.076003 containerd[1495]: time="2025-02-13T19:32:46.075964726Z" level=info msg="CreateContainer within sandbox \"d958e4cecc4a82e75f35a01fdaefd48099dbc12f03d6654dfcebb09b04bcddbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca6601d7c622af1783877fd3768bd7309f46a4ca3b0fc6b8e080766c1fd8b870\"" Feb 13 19:32:46.076633 containerd[1495]: time="2025-02-13T19:32:46.076615421Z" level=info msg="StartContainer for \"ca6601d7c622af1783877fd3768bd7309f46a4ca3b0fc6b8e080766c1fd8b870\"" Feb 13 19:32:46.085038 containerd[1495]: time="2025-02-13T19:32:46.084903554Z" level=info msg="CreateContainer within sandbox \"3e88a228788b51c5771658a2b7d72e893bb362363a895a6eaf474ddda8da5e9b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f242eea1e95c326f6d725fa2992198a6b37b58aa32f185fb67dec8c8a7391ab9\"" Feb 13 19:32:46.085346 containerd[1495]: time="2025-02-13T19:32:46.085324833Z" level=info msg="StartContainer for \"f242eea1e95c326f6d725fa2992198a6b37b58aa32f185fb67dec8c8a7391ab9\"" Feb 13 19:32:46.101685 systemd[1]: Started cri-containerd-149bacca2699734a8eebb84accfac26856249f5b2810d42b70f9cc20553ae5ef.scope - libcontainer container 149bacca2699734a8eebb84accfac26856249f5b2810d42b70f9cc20553ae5ef. Feb 13 19:32:46.115736 systemd[1]: Started cri-containerd-ca6601d7c622af1783877fd3768bd7309f46a4ca3b0fc6b8e080766c1fd8b870.scope - libcontainer container ca6601d7c622af1783877fd3768bd7309f46a4ca3b0fc6b8e080766c1fd8b870. Feb 13 19:32:46.117371 systemd[1]: Started cri-containerd-f242eea1e95c326f6d725fa2992198a6b37b58aa32f185fb67dec8c8a7391ab9.scope - libcontainer container f242eea1e95c326f6d725fa2992198a6b37b58aa32f185fb67dec8c8a7391ab9. 
Feb 13 19:32:46.179091 containerd[1495]: time="2025-02-13T19:32:46.179038781Z" level=info msg="StartContainer for \"ca6601d7c622af1783877fd3768bd7309f46a4ca3b0fc6b8e080766c1fd8b870\" returns successfully" Feb 13 19:32:46.179920 containerd[1495]: time="2025-02-13T19:32:46.179039792Z" level=info msg="StartContainer for \"149bacca2699734a8eebb84accfac26856249f5b2810d42b70f9cc20553ae5ef\" returns successfully" Feb 13 19:32:46.179920 containerd[1495]: time="2025-02-13T19:32:46.179072800Z" level=info msg="StartContainer for \"f242eea1e95c326f6d725fa2992198a6b37b58aa32f185fb67dec8c8a7391ab9\" returns successfully" Feb 13 19:32:46.514809 kubelet[2315]: E0213 19:32:46.514769 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:46.515114 kubelet[2315]: E0213 19:32:46.515071 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:46.515699 kubelet[2315]: E0213 19:32:46.515555 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:46.593106 kubelet[2315]: I0213 19:32:46.592586 2315 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:32:47.308617 kubelet[2315]: E0213 19:32:47.308572 2315 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:32:47.404066 kubelet[2315]: I0213 19:32:47.403962 2315 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:32:47.410918 kubelet[2315]: E0213 19:32:47.410889 2315 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:32:47.511987 kubelet[2315]: E0213 19:32:47.511920 2315 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:32:47.516646 kubelet[2315]: E0213 19:32:47.516617 2315 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:47.612980 kubelet[2315]: E0213 19:32:47.612830 2315 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:32:47.713599 kubelet[2315]: E0213 19:32:47.713548 2315 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:32:47.814158 kubelet[2315]: E0213 19:32:47.814107 2315 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:32:47.914831 kubelet[2315]: E0213 19:32:47.914700 2315 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:32:48.014845 kubelet[2315]: E0213 19:32:48.014787 2315 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:32:48.115469 kubelet[2315]: E0213 19:32:48.115396 2315 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:32:48.472555 kubelet[2315]: I0213 19:32:48.472516 2315 apiserver.go:52] "Watching apiserver" Feb 13 19:32:48.483347 
kubelet[2315]: I0213 19:32:48.483313 2315 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:32:49.510548 systemd[1]: Reloading requested from client PID 2597 ('systemctl') (unit session-9.scope)... Feb 13 19:32:49.510569 systemd[1]: Reloading... Feb 13 19:32:49.597637 zram_generator::config[2636]: No configuration found. Feb 13 19:32:49.712428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:32:49.805034 systemd[1]: Reloading finished in 294 ms. Feb 13 19:32:49.853727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:49.868028 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:32:49.868342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:49.876918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:50.021737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:50.028868 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:32:50.093058 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:32:50.093058 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:32:50.093058 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:32:50.093420 kubelet[2681]: I0213 19:32:50.093049 2681 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:32:50.100588 kubelet[2681]: I0213 19:32:50.100539 2681 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:32:50.100588 kubelet[2681]: I0213 19:32:50.100571 2681 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:32:50.100786 kubelet[2681]: I0213 19:32:50.100763 2681 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:32:50.103146 kubelet[2681]: I0213 19:32:50.103115 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:32:50.104313 kubelet[2681]: I0213 19:32:50.104200 2681 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:32:50.111227 kubelet[2681]: I0213 19:32:50.111198 2681 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:32:50.111431 kubelet[2681]: I0213 19:32:50.111402 2681 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:32:50.111597 kubelet[2681]: I0213 19:32:50.111428 2681 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:32:50.111678 kubelet[2681]: I0213 19:32:50.111604 2681 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:32:50.111678 kubelet[2681]: I0213 19:32:50.111613 2681 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:32:50.111678 kubelet[2681]: I0213 19:32:50.111658 2681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:32:50.111772 kubelet[2681]: I0213 19:32:50.111744 2681 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:32:50.111772 kubelet[2681]: I0213 19:32:50.111754 2681 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:32:50.111820 kubelet[2681]: I0213 19:32:50.111776 2681 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:32:50.111820 kubelet[2681]: I0213 19:32:50.111794 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:32:50.112693 kubelet[2681]: I0213 19:32:50.112546 2681 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:32:50.113090 kubelet[2681]: I0213 19:32:50.113076 2681 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:32:50.113565 kubelet[2681]: I0213 19:32:50.113552 2681 server.go:1264] "Started kubelet" Feb 13 19:32:50.115403 kubelet[2681]: I0213 19:32:50.115391 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:32:50.116055 kubelet[2681]: I0213 19:32:50.116030 2681 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:32:50.116173 kubelet[2681]: I0213 19:32:50.116135 2681 desired_state_of_world_populator.go:149] "Desired 
state populator starts to run" Feb 13 19:32:50.116288 kubelet[2681]: I0213 19:32:50.116268 2681 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:32:50.118413 kubelet[2681]: I0213 19:32:50.117051 2681 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:32:50.120548 kubelet[2681]: I0213 19:32:50.118721 2681 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:32:50.120548 kubelet[2681]: I0213 19:32:50.119772 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:32:50.120548 kubelet[2681]: I0213 19:32:50.119966 2681 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:32:50.127522 kubelet[2681]: I0213 19:32:50.127476 2681 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:32:50.127608 kubelet[2681]: I0213 19:32:50.127581 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:32:50.129243 kubelet[2681]: E0213 19:32:50.129198 2681 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:32:50.129842 kubelet[2681]: I0213 19:32:50.129406 2681 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:32:50.134890 kubelet[2681]: I0213 19:32:50.134850 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:32:50.136336 kubelet[2681]: I0213 19:32:50.136302 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:32:50.136336 kubelet[2681]: I0213 19:32:50.136332 2681 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:32:50.136405 kubelet[2681]: I0213 19:32:50.136347 2681 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:32:50.136430 kubelet[2681]: E0213 19:32:50.136402 2681 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:32:50.161954 kubelet[2681]: I0213 19:32:50.161920 2681 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:32:50.161954 kubelet[2681]: I0213 19:32:50.161939 2681 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:32:50.161954 kubelet[2681]: I0213 19:32:50.161956 2681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:32:50.162131 kubelet[2681]: I0213 19:32:50.162119 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:32:50.162518 kubelet[2681]: I0213 19:32:50.162128 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:32:50.162518 kubelet[2681]: I0213 19:32:50.162147 2681 policy_none.go:49] "None policy: Start" Feb 13 19:32:50.162692 kubelet[2681]: I0213 19:32:50.162651 2681 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:32:50.162692 kubelet[2681]: I0213 19:32:50.162670 2681 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:32:50.162791 kubelet[2681]: I0213 19:32:50.162775 2681 state_mem.go:75] "Updated machine memory state" Feb 13 19:32:50.166918 kubelet[2681]: I0213 19:32:50.166786 2681 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:32:50.167046 
kubelet[2681]: I0213 19:32:50.166977 2681 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:32:50.167202 kubelet[2681]: I0213 19:32:50.167089 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:32:50.220433 kubelet[2681]: I0213 19:32:50.220397 2681 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:32:50.226046 kubelet[2681]: I0213 19:32:50.226022 2681 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:32:50.226106 kubelet[2681]: I0213 19:32:50.226084 2681 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:32:50.237618 kubelet[2681]: I0213 19:32:50.237573 2681 topology_manager.go:215] "Topology Admit Handler" podUID="b7cb27d76dab88dc2bf4e2df211878ae" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:32:50.238167 kubelet[2681]: I0213 19:32:50.237829 2681 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:32:50.238307 kubelet[2681]: I0213 19:32:50.238293 2681 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:32:50.417174 kubelet[2681]: I0213 19:32:50.417036 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:50.417174 kubelet[2681]: I0213 19:32:50.417088 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:50.417174 kubelet[2681]: I0213 19:32:50.417120 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:32:50.417174 kubelet[2681]: I0213 19:32:50.417139 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7cb27d76dab88dc2bf4e2df211878ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7cb27d76dab88dc2bf4e2df211878ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:32:50.417174 kubelet[2681]: I0213 19:32:50.417155 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:50.417401 kubelet[2681]: I0213 19:32:50.417171 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:50.417401 kubelet[2681]: I0213 19:32:50.417190 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:32:50.417401 kubelet[2681]: I0213 19:32:50.417207 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7cb27d76dab88dc2bf4e2df211878ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7cb27d76dab88dc2bf4e2df211878ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:32:50.417401 kubelet[2681]: I0213 19:32:50.417225 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7cb27d76dab88dc2bf4e2df211878ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7cb27d76dab88dc2bf4e2df211878ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:32:50.522491 sudo[2716]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:32:50.523022 sudo[2716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:32:50.554067 kubelet[2681]: E0213 19:32:50.554032 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:50.554573 kubelet[2681]: E0213 19:32:50.554463 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:50.554573 kubelet[2681]: E0213 19:32:50.554489 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:50.980016 sudo[2716]: pam_unix(sudo:session): session closed for user root Feb 13 19:32:51.112358 kubelet[2681]: I0213 19:32:51.112319 2681 apiserver.go:52] "Watching apiserver" Feb 13 19:32:51.116470 kubelet[2681]: I0213 19:32:51.116450 2681 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:32:51.150618 kubelet[2681]: E0213 19:32:51.150533 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:51.150618 kubelet[2681]: E0213 19:32:51.150550 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:51.155150 kubelet[2681]: E0213 19:32:51.155126 2681 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:32:51.155492 kubelet[2681]: E0213 19:32:51.155459 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:51.168885 kubelet[2681]: I0213 19:32:51.168842 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.168827612 podStartE2EDuration="1.168827612s" podCreationTimestamp="2025-02-13 19:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:32:51.16812599 +0000 UTC m=+1.134347436" watchObservedRunningTime="2025-02-13 19:32:51.168827612 +0000 UTC m=+1.135049058" Feb 13 19:32:51.174083 kubelet[2681]: I0213 19:32:51.174028 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.174011633 podStartE2EDuration="1.174011633s" podCreationTimestamp="2025-02-13 19:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:32:51.173433458 +0000 UTC m=+1.139654905" watchObservedRunningTime="2025-02-13 19:32:51.174011633 +0000 UTC m=+1.140233079" Feb 13 19:32:51.180727 kubelet[2681]: I0213 19:32:51.180391 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.180372892 podStartE2EDuration="1.180372892s" podCreationTimestamp="2025-02-13 19:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:32:51.180227995 +0000 UTC m=+1.146449441" watchObservedRunningTime="2025-02-13 19:32:51.180372892 +0000 UTC m=+1.146594338" Feb 13 19:32:52.152037 kubelet[2681]: E0213 19:32:52.151992 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:52.400799 sudo[1684]: pam_unix(sudo:session): session closed for user root Feb 13 19:32:52.402275 sshd[1683]: Connection closed by 10.0.0.1 port 33528 Feb 13 19:32:52.402598 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:52.407100 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:33528.service: Deactivated successfully. Feb 13 19:32:52.409256 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:32:52.409446 systemd[1]: session-9.scope: Consumed 5.216s CPU time, 190.8M memory peak, 0B memory swap peak. Feb 13 19:32:52.410070 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:32:52.411022 systemd-logind[1471]: Removed session 9. 
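The restarted kubelet (PID 2681) opens with the same three flag-deprecation warnings as the first instance, each pointing at the config file passed via --config. A hedged sketch of the config-file equivalents, using paths visible in this log where possible (the containerd socket path is an assumption; --pod-infra-container-image deliberately has no config-file field, since, as the log itself notes, the sandbox image "should also be set in the remote runtime"):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # assumed socket path
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # the directory probed at startup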
Feb 13 19:32:53.309540 kubelet[2681]: E0213 19:32:53.307176 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:56.339622 kubelet[2681]: E0213 19:32:56.339572 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:56.967463 kubelet[2681]: E0213 19:32:56.967420 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:57.158894 kubelet[2681]: E0213 19:32:57.158862 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:57.159642 kubelet[2681]: E0213 19:32:57.159620 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:32:59.935775 update_engine[1472]: I20250213 19:32:59.935676 1472 update_attempter.cc:509] Updating boot flags... Feb 13 19:33:00.006555 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2763) Feb 13 19:33:00.044638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2761) Feb 13 19:33:00.082525 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2761) Feb 13 19:33:03.310690 kubelet[2681]: E0213 19:33:03.310655 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:03.426713 kubelet[2681]: I0213 19:33:03.426676 2681 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:33:03.427045 containerd[1495]: time="2025-02-13T19:33:03.427011629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
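Two records around this point belong together: the kubelet pushes the node's pod CIDR (192.168.0.0/24) to the runtime, and containerd answers that no CNI config exists yet and that it will wait for one to be dropped into place — the Cilium artifacts unpacked via sudo earlier, and the cilium pods admitted below, are what eventually provide it. The CIDR itself is allocated by the control plane; with controller-manager CIDR allocation enabled it would come from flags like this hypothetical fragment of the kube-controller-manager command (not shown in this log):

    # hypothetical kube-controller-manager args
    - --allocate-node-cidrs=true
    - --cluster-cidr=192.168.0.0/24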
Feb 13 19:33:03.427466 kubelet[2681]: I0213 19:33:03.427201 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:33:04.091879 kubelet[2681]: I0213 19:33:04.091779 2681 topology_manager.go:215] "Topology Admit Handler" podUID="28b3eede-8e4f-4452-b744-5db9ad84d9c8" podNamespace="kube-system" podName="kube-proxy-k4g52" Feb 13 19:33:04.098360 kubelet[2681]: I0213 19:33:04.098300 2681 topology_manager.go:215] "Topology Admit Handler" podUID="0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" podNamespace="kube-system" podName="cilium-zk74w" Feb 13 19:33:04.099791 kubelet[2681]: I0213 19:33:04.099321 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28b3eede-8e4f-4452-b744-5db9ad84d9c8-xtables-lock\") pod \"kube-proxy-k4g52\" (UID: \"28b3eede-8e4f-4452-b744-5db9ad84d9c8\") " pod="kube-system/kube-proxy-k4g52" Feb 13 19:33:04.099927 kubelet[2681]: I0213 19:33:04.099912 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28b3eede-8e4f-4452-b744-5db9ad84d9c8-lib-modules\") pod \"kube-proxy-k4g52\" (UID: \"28b3eede-8e4f-4452-b744-5db9ad84d9c8\") " pod="kube-system/kube-proxy-k4g52" Feb 13 19:33:04.100117 kubelet[2681]: I0213 19:33:04.100029 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-822rl\" (UniqueName: \"kubernetes.io/projected/28b3eede-8e4f-4452-b744-5db9ad84d9c8-kube-api-access-822rl\") pod \"kube-proxy-k4g52\" (UID: \"28b3eede-8e4f-4452-b744-5db9ad84d9c8\") " pod="kube-system/kube-proxy-k4g52" Feb 13 19:33:04.100117 kubelet[2681]: I0213 19:33:04.100067 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28b3eede-8e4f-4452-b744-5db9ad84d9c8-kube-proxy\") pod \"kube-proxy-k4g52\" (UID: \"28b3eede-8e4f-4452-b744-5db9ad84d9c8\") " pod="kube-system/kube-proxy-k4g52" Feb 13 19:33:04.106150 systemd[1]: Created slice kubepods-besteffort-pod28b3eede_8e4f_4452_b744_5db9ad84d9c8.slice - libcontainer container kubepods-besteffort-pod28b3eede_8e4f_4452_b744_5db9ad84d9c8.slice. Feb 13 19:33:04.127364 systemd[1]: Created slice kubepods-burstable-pod0a7d6744_5e0d_4db8_8323_bf47bcf1c7d6.slice - libcontainer container kubepods-burstable-pod0a7d6744_5e0d_4db8_8323_bf47bcf1c7d6.slice. 
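The four volumes the reconciler attaches for kube-proxy-k4g52 map onto a conventional kube-proxy DaemonSet pod template: kube-api-access-822rl is the automatically injected projected service-account token, and the other three plausibly correspond to a stanza like the following (host paths are kubeadm-style assumptions, not shown in this log):

    volumes:
    - name: kube-proxy
      configMap:
        name: kube-proxy
    - name: xtables-lock
      hostPath:
        path: /run/xtables.lock
        type: FileOrCreate
    - name: lib-modules
      hostPath:
        path: /lib/modules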
Feb 13 19:33:04.201080 kubelet[2681]: I0213 19:33:04.200997 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-bpf-maps\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201080 kubelet[2681]: I0213 19:33:04.201074 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-hubble-tls\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201280 kubelet[2681]: I0213 19:33:04.201122 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-etc-cni-netd\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201280 kubelet[2681]: I0213 19:33:04.201206 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-host-proc-sys-kernel\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201280 kubelet[2681]: I0213 19:33:04.201265 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29qx\" (UniqueName: \"kubernetes.io/projected/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-kube-api-access-h29qx\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201427 kubelet[2681]: I0213 19:33:04.201390 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-lib-modules\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201469 kubelet[2681]: I0213 19:33:04.201426 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-host-proc-sys-net\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201469 kubelet[2681]: I0213 19:33:04.201443 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-hostproc\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201469 kubelet[2681]: I0213 19:33:04.201458 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-cgroup\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201568 kubelet[2681]: I0213 19:33:04.201481 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-run\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201568 kubelet[2681]: I0213 19:33:04.201517 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cni-path\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201568 kubelet[2681]: I0213 19:33:04.201532 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-clustermesh-secrets\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201653 kubelet[2681]: I0213 19:33:04.201581 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-xtables-lock\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.201653 kubelet[2681]: I0213 19:33:04.201595 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-config-path\") pod \"cilium-zk74w\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " pod="kube-system/cilium-zk74w" Feb 13 19:33:04.423671 kubelet[2681]: E0213 19:33:04.423480 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:04.424478 containerd[1495]: time="2025-02-13T19:33:04.424420041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k4g52,Uid:28b3eede-8e4f-4452-b744-5db9ad84d9c8,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:04.432643 kubelet[2681]: E0213 19:33:04.432283 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:04.434154 containerd[1495]: time="2025-02-13T19:33:04.434114265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zk74w,Uid:0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:04.468333 kubelet[2681]: I0213 19:33:04.466646 2681 topology_manager.go:215] "Topology Admit Handler" podUID="9716826f-40e7-4295-93c9-ec67bbb84691" podNamespace="kube-system" podName="cilium-operator-599987898-vxrzf" Feb 13 19:33:04.474414 containerd[1495]: time="2025-02-13T19:33:04.474254772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:04.474414 containerd[1495]: time="2025-02-13T19:33:04.474371245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:04.474414 containerd[1495]: time="2025-02-13T19:33:04.474387392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:04.474615 containerd[1495]: time="2025-02-13T19:33:04.474516596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:04.477050 systemd[1]: Created slice kubepods-besteffort-pod9716826f_40e7_4295_93c9_ec67bbb84691.slice - libcontainer container kubepods-besteffort-pod9716826f_40e7_4295_93c9_ec67bbb84691.slice. Feb 13 19:33:04.495046 containerd[1495]: time="2025-02-13T19:33:04.494939762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:04.495176 containerd[1495]: time="2025-02-13T19:33:04.495073944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:04.495176 containerd[1495]: time="2025-02-13T19:33:04.495109913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:04.495348 containerd[1495]: time="2025-02-13T19:33:04.495275968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:04.506727 systemd[1]: Started cri-containerd-f23570add82c5eaefc18e7e28197d7a76acbd5a580442c8b2e86be89f80fca42.scope - libcontainer container f23570add82c5eaefc18e7e28197d7a76acbd5a580442c8b2e86be89f80fca42. Feb 13 19:33:04.510491 systemd[1]: Started cri-containerd-eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0.scope - libcontainer container eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0. Feb 13 19:33:04.534211 containerd[1495]: time="2025-02-13T19:33:04.534082833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k4g52,Uid:28b3eede-8e4f-4452-b744-5db9ad84d9c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f23570add82c5eaefc18e7e28197d7a76acbd5a580442c8b2e86be89f80fca42\"" Feb 13 19:33:04.535108 kubelet[2681]: E0213 19:33:04.535075 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:04.537482 containerd[1495]: time="2025-02-13T19:33:04.537362930Z" level=info msg="CreateContainer within sandbox \"f23570add82c5eaefc18e7e28197d7a76acbd5a580442c8b2e86be89f80fca42\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:33:04.540043 containerd[1495]: time="2025-02-13T19:33:04.539939076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zk74w,Uid:0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\"" Feb 13 19:33:04.540821 kubelet[2681]: E0213 19:33:04.540773 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:04.542371 containerd[1495]: time="2025-02-13T19:33:04.542284430Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:33:04.560394 containerd[1495]: time="2025-02-13T19:33:04.560340457Z" level=info msg="CreateContainer within sandbox \"f23570add82c5eaefc18e7e28197d7a76acbd5a580442c8b2e86be89f80fca42\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1967a723a9d2507ed34e42d39736bdcd49fdc63cb17771412f0c0e950bd4c896\"" Feb 13 19:33:04.560937 containerd[1495]: time="2025-02-13T19:33:04.560887831Z" level=info msg="StartContainer for \"1967a723a9d2507ed34e42d39736bdcd49fdc63cb17771412f0c0e950bd4c896\"" Feb 13 19:33:04.590637 systemd[1]: Started cri-containerd-1967a723a9d2507ed34e42d39736bdcd49fdc63cb17771412f0c0e950bd4c896.scope - libcontainer container 1967a723a9d2507ed34e42d39736bdcd49fdc63cb17771412f0c0e950bd4c896. Feb 13 19:33:04.604494 kubelet[2681]: I0213 19:33:04.604447 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9716826f-40e7-4295-93c9-ec67bbb84691-cilium-config-path\") pod \"cilium-operator-599987898-vxrzf\" (UID: \"9716826f-40e7-4295-93c9-ec67bbb84691\") " pod="kube-system/cilium-operator-599987898-vxrzf" Feb 13 19:33:04.604494 kubelet[2681]: I0213 19:33:04.604492 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vtr\" (UniqueName: \"kubernetes.io/projected/9716826f-40e7-4295-93c9-ec67bbb84691-kube-api-access-c7vtr\") pod \"cilium-operator-599987898-vxrzf\" (UID: \"9716826f-40e7-4295-93c9-ec67bbb84691\") " pod="kube-system/cilium-operator-599987898-vxrzf" Feb 13 19:33:04.624449 containerd[1495]: time="2025-02-13T19:33:04.624404434Z" level=info msg="StartContainer for \"1967a723a9d2507ed34e42d39736bdcd49fdc63cb17771412f0c0e950bd4c896\" returns successfully" Feb 13 19:33:04.780357 kubelet[2681]: E0213 19:33:04.780316 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:04.780683 containerd[1495]: time="2025-02-13T19:33:04.780655033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vxrzf,Uid:9716826f-40e7-4295-93c9-ec67bbb84691,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:04.908484 containerd[1495]: time="2025-02-13T19:33:04.907760558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:04.908484 containerd[1495]: time="2025-02-13T19:33:04.908438848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:04.908484 containerd[1495]: time="2025-02-13T19:33:04.908452831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:04.908710 containerd[1495]: time="2025-02-13T19:33:04.908613607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:04.928635 systemd[1]: Started cri-containerd-e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09.scope - libcontainer container e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09. 
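The kube-proxy lifecycle above traces the CRI call order the kubelet drives for every pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it. A self-contained sketch of that ordering (the interface and fake runtime here are illustrative stand-ins; the real types live in k8s.io/cri-api and carry far richer request structs):

```go
package main

import "fmt"

// runtimeService captures just the call order visible in the log above.
type runtimeService interface {
	RunPodSandbox(name string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime hands back synthetic ids so the sequence can be exercised.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(containerID string) error { return nil }

func main() {
	var rt runtimeService = &fakeRuntime{}
	sb, _ := rt.RunPodSandbox("kube-proxy-k4g52")
	c, _ := rt.CreateContainer(sb, "kube-proxy")
	_ = rt.StartContainer(c)
	fmt.Println("started", c, "in", sb)
}
```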
Feb 13 19:33:04.963543 containerd[1495]: time="2025-02-13T19:33:04.963479078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vxrzf,Uid:9716826f-40e7-4295-93c9-ec67bbb84691,Namespace:kube-system,Attempt:0,} returns sandbox id \"e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09\"" Feb 13 19:33:04.964301 kubelet[2681]: E0213 19:33:04.964278 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:05.174085 kubelet[2681]: E0213 19:33:05.173968 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:05.183820 kubelet[2681]: I0213 19:33:05.183751 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k4g52" podStartSLOduration=1.183732832 podStartE2EDuration="1.183732832s" podCreationTimestamp="2025-02-13 19:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:33:05.183075347 +0000 UTC m=+15.149296793" watchObservedRunningTime="2025-02-13 19:33:05.183732832 +0000 UTC m=+15.149954268" Feb 13 19:33:13.957213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1989129226.mount: Deactivated successfully. Feb 13 19:33:16.735199 containerd[1495]: time="2025-02-13T19:33:16.735101617Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:16.737639 containerd[1495]: time="2025-02-13T19:33:16.737581593Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:33:16.741858 containerd[1495]: time="2025-02-13T19:33:16.741817971Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:16.743656 containerd[1495]: time="2025-02-13T19:33:16.743330001Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.200950954s" Feb 13 19:33:16.743656 containerd[1495]: time="2025-02-13T19:33:16.743357299Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:33:16.745536 containerd[1495]: time="2025-02-13T19:33:16.745478458Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:33:16.751859 containerd[1495]: time="2025-02-13T19:33:16.751826910Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:33:16.771663 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967354813.mount: Deactivated successfully. Feb 13 19:33:16.775577 containerd[1495]: time="2025-02-13T19:33:16.775538276Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\"" Feb 13 19:33:16.776131 containerd[1495]: time="2025-02-13T19:33:16.776097397Z" level=info msg="StartContainer for \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\"" Feb 13 19:33:16.816635 systemd[1]: Started cri-containerd-6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46.scope - libcontainer container 6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46. Feb 13 19:33:16.845781 containerd[1495]: time="2025-02-13T19:33:16.845729781Z" level=info msg="StartContainer for \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\" returns successfully" Feb 13 19:33:16.858643 systemd[1]: cri-containerd-6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46.scope: Deactivated successfully. Feb 13 19:33:17.297837 containerd[1495]: time="2025-02-13T19:33:17.297754924Z" level=info msg="shim disconnected" id=6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46 namespace=k8s.io Feb 13 19:33:17.297837 containerd[1495]: time="2025-02-13T19:33:17.297824127Z" level=warning msg="cleaning up after shim disconnected" id=6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46 namespace=k8s.io Feb 13 19:33:17.297837 containerd[1495]: time="2025-02-13T19:33:17.297833093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:33:17.700602 kubelet[2681]: E0213 19:33:17.700471 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:17.702458 containerd[1495]: time="2025-02-13T19:33:17.702411357Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:33:17.768068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46-rootfs.mount: Deactivated successfully. Feb 13 19:33:18.455347 containerd[1495]: time="2025-02-13T19:33:18.455285453Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\"" Feb 13 19:33:18.457223 containerd[1495]: time="2025-02-13T19:33:18.455814888Z" level=info msg="StartContainer for \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\"" Feb 13 19:33:18.457325 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:38046.service - OpenSSH per-connection server daemon (10.0.0.1:38046). Feb 13 19:33:18.489752 systemd[1]: Started cri-containerd-63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165.scope - libcontainer container 63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165. 
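From 19:33:16 onward the log walks cilium's init chain inside the single eb71480d... sandbox: mount-cgroup runs to completion (its scope deactivates, the shim disconnects, the rootfs unmounts) before apply-sysctl-overwrites is created, and later entries continue with mount-bpf-fs, clean-cilium-state, and finally cilium-agent. A sketch that recovers that sequence from journal lines shaped like these (the regexp targets the &ContainerMetadata fragment exactly as printed in this log; the sandbox ids in the sample lines are shortened):

```go
package main

import (
	"fmt"
	"regexp"
)

// nameRE pulls the container name out of CreateContainer messages.
var nameRE = regexp.MustCompile(`for container &ContainerMetadata\{Name:([^,]+),`)

func main() {
	lines := []string{
		`msg="CreateContainer within sandbox \"eb714...\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"`,
		`msg="CreateContainer within sandbox \"eb714...\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"`,
	}
	for _, l := range lines {
		if m := nameRE.FindStringSubmatch(l); m != nil {
			fmt.Println("init step:", m[1])
		}
	}
}
```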
Feb 13 19:33:18.499057 sshd[3150]: Accepted publickey for core from 10.0.0.1 port 38046 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:18.500757 sshd-session[3150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:18.506803 systemd-logind[1471]: New session 10 of user core. Feb 13 19:33:18.512658 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:33:18.522680 containerd[1495]: time="2025-02-13T19:33:18.522635642Z" level=info msg="StartContainer for \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\" returns successfully" Feb 13 19:33:18.537705 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:33:18.538025 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:33:18.538086 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:33:18.544906 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:33:18.545135 systemd[1]: cri-containerd-63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165.scope: Deactivated successfully. Feb 13 19:33:18.576027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:33:18.578645 containerd[1495]: time="2025-02-13T19:33:18.578305373Z" level=info msg="shim disconnected" id=63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165 namespace=k8s.io Feb 13 19:33:18.578645 containerd[1495]: time="2025-02-13T19:33:18.578373655Z" level=warning msg="cleaning up after shim disconnected" id=63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165 namespace=k8s.io Feb 13 19:33:18.578645 containerd[1495]: time="2025-02-13T19:33:18.578386398Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:33:18.593173 containerd[1495]: time="2025-02-13T19:33:18.593113784Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:33:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:33:18.648671 sshd[3176]: Connection closed by 10.0.0.1 port 38046 Feb 13 19:33:18.649041 sshd-session[3150]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:18.653721 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:38046.service: Deactivated successfully. Feb 13 19:33:18.655754 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:33:18.656365 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:33:18.657243 systemd-logind[1471]: Removed session 10. 
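Session 10 above is short-lived: systemd-logind opens it at 19:33:18.506803 and removes it at 19:33:18.657243, roughly 150 ms later. The same arithmetic in Go, with the year supplied by hand since these syslog-style timestamps omit it:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go reference-time layout matching "Feb 13 19:33:18.506803" plus a year.
	const layout = "Jan 2 15:04:05.000000 2006"
	opened, _ := time.Parse(layout, "Feb 13 19:33:18.506803 2025") // New session 10
	closed, _ := time.Parse(layout, "Feb 13 19:33:18.657243 2025") // Removed session 10
	fmt.Println("session 10 lasted", closed.Sub(opened))
}
```

It prints "session 10 lasted 150.44ms", consistent with the single short command run over SSH here.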
Feb 13 19:33:18.704231 kubelet[2681]: E0213 19:33:18.704200 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:18.706278 containerd[1495]: time="2025-02-13T19:33:18.705985422Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:33:18.723396 containerd[1495]: time="2025-02-13T19:33:18.723346750Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\"" Feb 13 19:33:18.723874 containerd[1495]: time="2025-02-13T19:33:18.723834361Z" level=info msg="StartContainer for \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\"" Feb 13 19:33:18.753650 systemd[1]: Started cri-containerd-29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5.scope - libcontainer container 29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5. Feb 13 19:33:18.768541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165-rootfs.mount: Deactivated successfully. Feb 13 19:33:18.785647 containerd[1495]: time="2025-02-13T19:33:18.785597812Z" level=info msg="StartContainer for \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\" returns successfully" Feb 13 19:33:18.786141 systemd[1]: cri-containerd-29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5.scope: Deactivated successfully. Feb 13 19:33:18.806813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5-rootfs.mount: Deactivated successfully. Feb 13 19:33:18.813936 containerd[1495]: time="2025-02-13T19:33:18.813848889Z" level=info msg="shim disconnected" id=29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5 namespace=k8s.io Feb 13 19:33:18.813936 containerd[1495]: time="2025-02-13T19:33:18.813915017Z" level=warning msg="cleaning up after shim disconnected" id=29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5 namespace=k8s.io Feb 13 19:33:18.813936 containerd[1495]: time="2025-02-13T19:33:18.813926177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:33:19.315935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840298198.mount: Deactivated successfully. 
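The mount-bpf-fs step above (container 29269c89...) exists to make the BPF filesystem available to the agent. Reduced to its essence, and assuming root on Linux (this is a sketch of the effect, not cilium's actual init code), the operation is a single mount call:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Mount the BPF filesystem where cilium expects to pin its maps.
	if err := syscall.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		fmt.Println("mount bpffs:", err)
		return
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf")
}
```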
Feb 13 19:33:19.708253 kubelet[2681]: E0213 19:33:19.708111 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:19.710193 containerd[1495]: time="2025-02-13T19:33:19.710115382Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:33:20.435897 containerd[1495]: time="2025-02-13T19:33:20.435822202Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\"" Feb 13 19:33:20.436353 containerd[1495]: time="2025-02-13T19:33:20.436294552Z" level=info msg="StartContainer for \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\"" Feb 13 19:33:20.468750 systemd[1]: Started cri-containerd-71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a.scope - libcontainer container 71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a. Feb 13 19:33:20.493045 systemd[1]: cri-containerd-71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a.scope: Deactivated successfully. Feb 13 19:33:20.657349 containerd[1495]: time="2025-02-13T19:33:20.657282712Z" level=info msg="StartContainer for \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\" returns successfully" Feb 13 19:33:20.711292 kubelet[2681]: E0213 19:33:20.711181 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:21.132217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a-rootfs.mount: Deactivated successfully. 
Feb 13 19:33:21.413074 containerd[1495]: time="2025-02-13T19:33:21.412958002Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:21.629146 containerd[1495]: time="2025-02-13T19:33:21.629069495Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:33:21.629838 containerd[1495]: time="2025-02-13T19:33:21.629783519Z" level=info msg="shim disconnected" id=71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a namespace=k8s.io Feb 13 19:33:21.629906 containerd[1495]: time="2025-02-13T19:33:21.629837410Z" level=warning msg="cleaning up after shim disconnected" id=71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a namespace=k8s.io Feb 13 19:33:21.629906 containerd[1495]: time="2025-02-13T19:33:21.629861555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:33:21.827678 containerd[1495]: time="2025-02-13T19:33:21.827607754Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:21.848046 kubelet[2681]: E0213 19:33:21.847461 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:21.849672 containerd[1495]: time="2025-02-13T19:33:21.849627420Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:33:22.219805 containerd[1495]: time="2025-02-13T19:33:22.219615537Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.4740658s" Feb 13 19:33:22.219805 containerd[1495]: time="2025-02-13T19:33:22.219678405Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:33:22.221401 containerd[1495]: time="2025-02-13T19:33:22.221376882Z" level=info msg="CreateContainer within sandbox \"e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:33:22.788359 containerd[1495]: time="2025-02-13T19:33:22.788272688Z" level=info msg="CreateContainer within sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\"" Feb 13 19:33:22.789103 containerd[1495]: time="2025-02-13T19:33:22.789028270Z" level=info msg="StartContainer for \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\"" Feb 13 19:33:22.816765 systemd[1]: Started 
cri-containerd-5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff.scope - libcontainer container 5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff. Feb 13 19:33:23.007198 containerd[1495]: time="2025-02-13T19:33:23.007135055Z" level=info msg="StartContainer for \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\" returns successfully" Feb 13 19:33:23.007330 containerd[1495]: time="2025-02-13T19:33:23.007143591Z" level=info msg="CreateContainer within sandbox \"e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\"" Feb 13 19:33:23.008128 containerd[1495]: time="2025-02-13T19:33:23.008076528Z" level=info msg="StartContainer for \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\"" Feb 13 19:33:23.043210 systemd[1]: Started cri-containerd-4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5.scope - libcontainer container 4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5. Feb 13 19:33:23.138782 kubelet[2681]: I0213 19:33:23.138733 2681 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:33:23.201143 containerd[1495]: time="2025-02-13T19:33:23.201042189Z" level=info msg="StartContainer for \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\" returns successfully" Feb 13 19:33:23.226340 kubelet[2681]: I0213 19:33:23.226272 2681 topology_manager.go:215] "Topology Admit Handler" podUID="2f43e59a-e7b5-4a1b-8cee-4b3d3f567128" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gg7mf" Feb 13 19:33:23.234862 kubelet[2681]: I0213 19:33:23.234800 2681 topology_manager.go:215] "Topology Admit Handler" podUID="be4dc9ab-4835-4663-b7c7-12759ceaf5dd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qlhg6" Feb 13 19:33:23.235401 kubelet[2681]: I0213 19:33:23.235370 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be4dc9ab-4835-4663-b7c7-12759ceaf5dd-config-volume\") pod \"coredns-7db6d8ff4d-qlhg6\" (UID: \"be4dc9ab-4835-4663-b7c7-12759ceaf5dd\") " pod="kube-system/coredns-7db6d8ff4d-qlhg6" Feb 13 19:33:23.235440 kubelet[2681]: I0213 19:33:23.235424 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvrgc\" (UniqueName: \"kubernetes.io/projected/2f43e59a-e7b5-4a1b-8cee-4b3d3f567128-kube-api-access-nvrgc\") pod \"coredns-7db6d8ff4d-gg7mf\" (UID: \"2f43e59a-e7b5-4a1b-8cee-4b3d3f567128\") " pod="kube-system/coredns-7db6d8ff4d-gg7mf" Feb 13 19:33:23.235468 kubelet[2681]: I0213 19:33:23.235456 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f43e59a-e7b5-4a1b-8cee-4b3d3f567128-config-volume\") pod \"coredns-7db6d8ff4d-gg7mf\" (UID: \"2f43e59a-e7b5-4a1b-8cee-4b3d3f567128\") " pod="kube-system/coredns-7db6d8ff4d-gg7mf" Feb 13 19:33:23.235494 kubelet[2681]: I0213 19:33:23.235484 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8km6\" (UniqueName: \"kubernetes.io/projected/be4dc9ab-4835-4663-b7c7-12759ceaf5dd-kube-api-access-c8km6\") pod \"coredns-7db6d8ff4d-qlhg6\" (UID: \"be4dc9ab-4835-4663-b7c7-12759ceaf5dd\") " pod="kube-system/coredns-7db6d8ff4d-qlhg6" Feb 13 
19:33:23.242356 systemd[1]: Created slice kubepods-burstable-pod2f43e59a_e7b5_4a1b_8cee_4b3d3f567128.slice - libcontainer container kubepods-burstable-pod2f43e59a_e7b5_4a1b_8cee_4b3d3f567128.slice. Feb 13 19:33:23.255591 systemd[1]: Created slice kubepods-burstable-podbe4dc9ab_4835_4663_b7c7_12759ceaf5dd.slice - libcontainer container kubepods-burstable-podbe4dc9ab_4835_4663_b7c7_12759ceaf5dd.slice. Feb 13 19:33:23.548755 kubelet[2681]: E0213 19:33:23.548698 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:23.550188 containerd[1495]: time="2025-02-13T19:33:23.550140483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gg7mf,Uid:2f43e59a-e7b5-4a1b-8cee-4b3d3f567128,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:23.560768 kubelet[2681]: E0213 19:33:23.560712 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:23.562416 containerd[1495]: time="2025-02-13T19:33:23.562369321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qlhg6,Uid:be4dc9ab-4835-4663-b7c7-12759ceaf5dd,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:23.677740 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:38058.service - OpenSSH per-connection server daemon (10.0.0.1:38058). Feb 13 19:33:23.756102 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 38058 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:23.758702 sshd-session[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:23.763569 systemd-logind[1471]: New session 11 of user core. Feb 13 19:33:23.771728 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:33:23.861634 kubelet[2681]: E0213 19:33:23.861054 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:23.865772 kubelet[2681]: E0213 19:33:23.865748 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:24.016932 sshd[3542]: Connection closed by 10.0.0.1 port 38058 Feb 13 19:33:24.017535 kubelet[2681]: I0213 19:33:24.017087 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zk74w" podStartSLOduration=7.813201327 podStartE2EDuration="20.017065134s" podCreationTimestamp="2025-02-13 19:33:04 +0000 UTC" firstStartedPulling="2025-02-13 19:33:04.541470486 +0000 UTC m=+14.507691932" lastFinishedPulling="2025-02-13 19:33:16.745334293 +0000 UTC m=+26.711555739" observedRunningTime="2025-02-13 19:33:24.016655913 +0000 UTC m=+33.982877359" watchObservedRunningTime="2025-02-13 19:33:24.017065134 +0000 UTC m=+33.983286580" Feb 13 19:33:24.019625 sshd-session[3533]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:24.023060 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:33:24.024072 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:38058.service: Deactivated successfully. Feb 13 19:33:24.027178 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:33:24.029924 systemd-logind[1471]: Removed session 11. 
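The pod_startup_latency_tracker entry above encodes a fixed relation: podStartE2EDuration is the running-time stamp minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving the cilium-zk74w numbers from the timestamps in the entry (the layout is the one time.Time prints by default):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 19:33:04 +0000 UTC")
	firstPull := mustParse("2025-02-13 19:33:04.541470486 +0000 UTC")
	lastPull := mustParse("2025-02-13 19:33:16.745334293 +0000 UTC")
	running := mustParse("2025-02-13 19:33:24.017065134 +0000 UTC")

	e2e := running.Sub(created)          // total startup time
	slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo.Seconds())
}
```

Both printed values match the entry exactly: 20.017065134s end to end and 7.813201327 for the SLO figure, so the twelve-second cilium image pull accounts for the gap.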
Feb 13 19:33:24.867988 kubelet[2681]: E0213 19:33:24.867939 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:24.868470 kubelet[2681]: E0213 19:33:24.868198 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:25.870261 kubelet[2681]: E0213 19:33:25.870206 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:27.322891 systemd-networkd[1407]: cilium_host: Link UP Feb 13 19:33:27.323117 systemd-networkd[1407]: cilium_net: Link UP Feb 13 19:33:27.323361 systemd-networkd[1407]: cilium_net: Gained carrier Feb 13 19:33:27.324240 systemd-networkd[1407]: cilium_host: Gained carrier Feb 13 19:33:27.426592 systemd-networkd[1407]: cilium_vxlan: Link UP Feb 13 19:33:27.426602 systemd-networkd[1407]: cilium_vxlan: Gained carrier Feb 13 19:33:27.633206 systemd-networkd[1407]: cilium_host: Gained IPv6LL Feb 13 19:33:27.659542 kernel: NET: Registered PF_ALG protocol family Feb 13 19:33:27.727644 systemd-networkd[1407]: cilium_net: Gained IPv6LL Feb 13 19:33:28.352208 systemd-networkd[1407]: lxc_health: Link UP Feb 13 19:33:28.372656 systemd-networkd[1407]: lxc_health: Gained carrier Feb 13 19:33:28.440023 kubelet[2681]: E0213 19:33:28.439977 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:28.523524 kubelet[2681]: I0213 19:33:28.523423 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-vxrzf" podStartSLOduration=7.26807126 podStartE2EDuration="24.523404895s" podCreationTimestamp="2025-02-13 19:33:04 +0000 UTC" firstStartedPulling="2025-02-13 19:33:04.96486059 +0000 UTC m=+14.931082037" lastFinishedPulling="2025-02-13 19:33:22.220194226 +0000 UTC m=+32.186415672" observedRunningTime="2025-02-13 19:33:24.060742588 +0000 UTC m=+34.026964034" watchObservedRunningTime="2025-02-13 19:33:28.523404895 +0000 UTC m=+38.489626341" Feb 13 19:33:28.551722 systemd-networkd[1407]: cilium_vxlan: Gained IPv6LL Feb 13 19:33:28.712617 systemd-networkd[1407]: lxcc54e196b8c27: Link UP Feb 13 19:33:28.720534 kernel: eth0: renamed from tmpfaa13 Feb 13 19:33:28.727152 systemd-networkd[1407]: lxcc54e196b8c27: Gained carrier Feb 13 19:33:28.740946 systemd-networkd[1407]: lxc9cb808518649: Link UP Feb 13 19:33:28.762550 kernel: eth0: renamed from tmp2c770 Feb 13 19:33:28.770344 systemd-networkd[1407]: lxc9cb808518649: Gained carrier Feb 13 19:33:28.915444 kubelet[2681]: E0213 19:33:28.915415 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:29.035042 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:35706.service - OpenSSH per-connection server daemon (10.0.0.1:35706). 
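The systemd-networkd entries above show cilium's datapath coming up in order: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, the agent's lxc_health probe, and one lxc* device per pod endpoint, each gaining carrier and then an IPv6 link-local address. A standard-library sketch that would enumerate those links when run on the node:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Keep only the cilium-managed devices named in the log above.
		if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
			fmt.Printf("%-16s flags=%v\n", ifc.Name, ifc.Flags)
		}
	}
}
```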
Feb 13 19:33:29.080725 sshd[3925]: Accepted publickey for core from 10.0.0.1 port 35706 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:29.083003 sshd-session[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:29.087412 systemd-logind[1471]: New session 12 of user core. Feb 13 19:33:29.095629 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:33:29.232622 sshd[3927]: Connection closed by 10.0.0.1 port 35706 Feb 13 19:33:29.233030 sshd-session[3925]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:29.239003 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:33:29.239350 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:35706.service: Deactivated successfully. Feb 13 19:33:29.241996 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:33:29.243914 systemd-logind[1471]: Removed session 12. Feb 13 19:33:29.447707 systemd-networkd[1407]: lxc_health: Gained IPv6LL Feb 13 19:33:30.151664 systemd-networkd[1407]: lxc9cb808518649: Gained IPv6LL Feb 13 19:33:30.599738 systemd-networkd[1407]: lxcc54e196b8c27: Gained IPv6LL Feb 13 19:33:32.671571 containerd[1495]: time="2025-02-13T19:33:32.671445368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:32.671571 containerd[1495]: time="2025-02-13T19:33:32.671539114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:32.671571 containerd[1495]: time="2025-02-13T19:33:32.671607683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.671571 containerd[1495]: time="2025-02-13T19:33:32.671732808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.701697 systemd[1]: Started cri-containerd-faa13d9c28389d4f26b02b46b652cd7941e04f4753b86c667c903f4864b67640.scope - libcontainer container faa13d9c28389d4f26b02b46b652cd7941e04f4753b86c667c903f4864b67640. Feb 13 19:33:32.708184 containerd[1495]: time="2025-02-13T19:33:32.708071403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:32.708184 containerd[1495]: time="2025-02-13T19:33:32.708127288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:32.708184 containerd[1495]: time="2025-02-13T19:33:32.708141514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.708464 containerd[1495]: time="2025-02-13T19:33:32.708232336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.722358 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:33:32.731833 systemd[1]: Started cri-containerd-2c770a38488acff10f542606b7f665757517ce489639565397be26e686799c6d.scope - libcontainer container 2c770a38488acff10f542606b7f665757517ce489639565397be26e686799c6d. 
Feb 13 19:33:32.748073 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:33:32.751474 containerd[1495]: time="2025-02-13T19:33:32.751435920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qlhg6,Uid:be4dc9ab-4835-4663-b7c7-12759ceaf5dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"faa13d9c28389d4f26b02b46b652cd7941e04f4753b86c667c903f4864b67640\"" Feb 13 19:33:32.752459 kubelet[2681]: E0213 19:33:32.752422 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:32.754925 containerd[1495]: time="2025-02-13T19:33:32.754852226Z" level=info msg="CreateContainer within sandbox \"faa13d9c28389d4f26b02b46b652cd7941e04f4753b86c667c903f4864b67640\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:33:32.773314 containerd[1495]: time="2025-02-13T19:33:32.773270305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gg7mf,Uid:2f43e59a-e7b5-4a1b-8cee-4b3d3f567128,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c770a38488acff10f542606b7f665757517ce489639565397be26e686799c6d\"" Feb 13 19:33:32.774834 kubelet[2681]: E0213 19:33:32.774797 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:32.776826 containerd[1495]: time="2025-02-13T19:33:32.776788303Z" level=info msg="CreateContainer within sandbox \"2c770a38488acff10f542606b7f665757517ce489639565397be26e686799c6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:33:33.249461 containerd[1495]: time="2025-02-13T19:33:33.249369408Z" level=info msg="CreateContainer within sandbox \"faa13d9c28389d4f26b02b46b652cd7941e04f4753b86c667c903f4864b67640\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1af608cc0e1115c1ad72fd4dc9f2d543cd6876772c06ff01c54db3bba1803086\"" Feb 13 19:33:33.250049 containerd[1495]: time="2025-02-13T19:33:33.250008780Z" level=info msg="StartContainer for \"1af608cc0e1115c1ad72fd4dc9f2d543cd6876772c06ff01c54db3bba1803086\"" Feb 13 19:33:33.284654 systemd[1]: Started cri-containerd-1af608cc0e1115c1ad72fd4dc9f2d543cd6876772c06ff01c54db3bba1803086.scope - libcontainer container 1af608cc0e1115c1ad72fd4dc9f2d543cd6876772c06ff01c54db3bba1803086. Feb 13 19:33:33.320639 containerd[1495]: time="2025-02-13T19:33:33.320465522Z" level=info msg="CreateContainer within sandbox \"2c770a38488acff10f542606b7f665757517ce489639565397be26e686799c6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5faabfbab086096d0a895b20fd27fcf63018457e7fbe551f6f306036002a5e0\"" Feb 13 19:33:33.321319 containerd[1495]: time="2025-02-13T19:33:33.321264415Z" level=info msg="StartContainer for \"e5faabfbab086096d0a895b20fd27fcf63018457e7fbe551f6f306036002a5e0\"" Feb 13 19:33:33.350795 systemd[1]: Started cri-containerd-e5faabfbab086096d0a895b20fd27fcf63018457e7fbe551f6f306036002a5e0.scope - libcontainer container e5faabfbab086096d0a895b20fd27fcf63018457e7fbe551f6f306036002a5e0. 
Feb 13 19:33:33.422405 containerd[1495]: time="2025-02-13T19:33:33.422327215Z" level=info msg="StartContainer for \"1af608cc0e1115c1ad72fd4dc9f2d543cd6876772c06ff01c54db3bba1803086\" returns successfully" Feb 13 19:33:33.422597 containerd[1495]: time="2025-02-13T19:33:33.422327446Z" level=info msg="StartContainer for \"e5faabfbab086096d0a895b20fd27fcf63018457e7fbe551f6f306036002a5e0\" returns successfully" Feb 13 19:33:33.677605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696451748.mount: Deactivated successfully. Feb 13 19:33:33.929710 kubelet[2681]: E0213 19:33:33.928030 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:33.930878 kubelet[2681]: E0213 19:33:33.930839 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:34.000583 kubelet[2681]: I0213 19:33:34.000084 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gg7mf" podStartSLOduration=30.00006596 podStartE2EDuration="30.00006596s" podCreationTimestamp="2025-02-13 19:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:33:34.000047235 +0000 UTC m=+43.966268681" watchObservedRunningTime="2025-02-13 19:33:34.00006596 +0000 UTC m=+43.966287406" Feb 13 19:33:34.251071 systemd[1]: Started sshd@12-10.0.0.22:22-10.0.0.1:35716.service - OpenSSH per-connection server daemon (10.0.0.1:35716). Feb 13 19:33:34.291420 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 35716 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:34.292960 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:34.297195 systemd-logind[1471]: New session 13 of user core. Feb 13 19:33:34.307621 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:33:34.439879 sshd[4122]: Connection closed by 10.0.0.1 port 35716 Feb 13 19:33:34.440246 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:34.445279 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:35716.service: Deactivated successfully. Feb 13 19:33:34.447720 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:33:34.448421 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:33:34.449399 systemd-logind[1471]: Removed session 13. 
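In the coredns-7db6d8ff4d-gg7mf startup entry above, firstStartedPulling and lastFinishedPulling read 0001-01-01 00:00:00 +0000 UTC. That is not clock corruption: it is Go's time.Time zero value, which the latency tracker leaves in place when no image pull happened because the image was already on disk:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var never time.Time            // the zero value, never set
	fmt.Println(never)             // 0001-01-01 00:00:00 +0000 UTC
	fmt.Println(never.IsZero())    // true
}
```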
Feb 13 19:33:34.933429 kubelet[2681]: E0213 19:33:34.933387 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:34.933896 kubelet[2681]: E0213 19:33:34.933395 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:35.935614 kubelet[2681]: E0213 19:33:35.935581 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:35.936058 kubelet[2681]: E0213 19:33:35.935748 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:39.451549 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:44556.service - OpenSSH per-connection server daemon (10.0.0.1:44556). Feb 13 19:33:39.495519 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 44556 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:39.497382 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:39.501927 systemd-logind[1471]: New session 14 of user core. Feb 13 19:33:39.509679 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:33:39.648823 sshd[4144]: Connection closed by 10.0.0.1 port 44556 Feb 13 19:33:39.649215 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:39.653871 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:44556.service: Deactivated successfully. Feb 13 19:33:39.656397 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:33:39.657161 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:33:39.658148 systemd-logind[1471]: Removed session 14. Feb 13 19:33:44.669037 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:56260.service - OpenSSH per-connection server daemon (10.0.0.1:56260). Feb 13 19:33:44.709944 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 56260 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:44.711755 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:44.715962 systemd-logind[1471]: New session 15 of user core. Feb 13 19:33:44.726652 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:33:44.848320 sshd[4160]: Connection closed by 10.0.0.1 port 56260 Feb 13 19:33:44.848762 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:44.860998 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:56260.service: Deactivated successfully. Feb 13 19:33:44.862992 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:33:44.865131 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:33:44.872783 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:56272.service - OpenSSH per-connection server daemon (10.0.0.1:56272). Feb 13 19:33:44.873831 systemd-logind[1471]: Removed session 15. 
Feb 13 19:33:44.907062 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 56272 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:44.908732 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:44.912895 systemd-logind[1471]: New session 16 of user core. Feb 13 19:33:44.928737 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:33:45.152413 sshd[4175]: Connection closed by 10.0.0.1 port 56272 Feb 13 19:33:45.154115 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:45.165427 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:56272.service: Deactivated successfully. Feb 13 19:33:45.167758 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:33:45.170052 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:33:45.175131 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:56278.service - OpenSSH per-connection server daemon (10.0.0.1:56278). Feb 13 19:33:45.178034 systemd-logind[1471]: Removed session 16. Feb 13 19:33:45.217296 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 56278 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:45.219133 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:45.224482 systemd-logind[1471]: New session 17 of user core. Feb 13 19:33:45.233677 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:33:45.370562 sshd[4188]: Connection closed by 10.0.0.1 port 56278 Feb 13 19:33:45.370970 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:45.375207 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:56278.service: Deactivated successfully. Feb 13 19:33:45.377447 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:33:45.378263 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:33:45.379301 systemd-logind[1471]: Removed session 17. Feb 13 19:33:50.383355 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:56288.service - OpenSSH per-connection server daemon (10.0.0.1:56288). Feb 13 19:33:50.424797 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 56288 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:50.426267 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:50.430910 systemd-logind[1471]: New session 18 of user core. Feb 13 19:33:50.440677 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:33:50.577390 sshd[4205]: Connection closed by 10.0.0.1 port 56288 Feb 13 19:33:50.577794 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:50.582585 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:56288.service: Deactivated successfully. Feb 13 19:33:50.584684 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:33:50.585573 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:33:50.586686 systemd-logind[1471]: Removed session 18. Feb 13 19:33:55.590728 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:45300.service - OpenSSH per-connection server daemon (10.0.0.1:45300). 
Feb 13 19:33:55.626140 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 45300 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:55.627659 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:55.631758 systemd-logind[1471]: New session 19 of user core. Feb 13 19:33:55.642601 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:33:55.761086 sshd[4220]: Connection closed by 10.0.0.1 port 45300 Feb 13 19:33:55.761616 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:55.778594 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:45300.service: Deactivated successfully. Feb 13 19:33:55.781163 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:33:55.782860 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:33:55.795940 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:45310.service - OpenSSH per-connection server daemon (10.0.0.1:45310). Feb 13 19:33:55.797005 systemd-logind[1471]: Removed session 19. Feb 13 19:33:55.830670 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 45310 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:55.832301 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:55.836950 systemd-logind[1471]: New session 20 of user core. Feb 13 19:33:55.850733 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:33:56.119523 sshd[4235]: Connection closed by 10.0.0.1 port 45310 Feb 13 19:33:56.119796 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:56.129566 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:45310.service: Deactivated successfully. Feb 13 19:33:56.131453 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:33:56.132816 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:33:56.138981 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:45322.service - OpenSSH per-connection server daemon (10.0.0.1:45322). Feb 13 19:33:56.140347 systemd-logind[1471]: Removed session 20. Feb 13 19:33:56.175174 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 45322 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:56.176927 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:56.181377 systemd-logind[1471]: New session 21 of user core. Feb 13 19:33:56.191673 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:33:57.971487 sshd[4247]: Connection closed by 10.0.0.1 port 45322 Feb 13 19:33:57.972773 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:57.982235 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:45322.service: Deactivated successfully. Feb 13 19:33:57.984628 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:33:57.987260 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:33:57.994895 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:45330.service - OpenSSH per-connection server daemon (10.0.0.1:45330). Feb 13 19:33:57.996360 systemd-logind[1471]: Removed session 21. 
Feb 13 19:33:58.032129 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 45330 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:58.033666 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:58.037729 systemd-logind[1471]: New session 22 of user core. Feb 13 19:33:58.044624 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:33:58.534455 sshd[4272]: Connection closed by 10.0.0.1 port 45330 Feb 13 19:33:58.534884 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:58.544058 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:45330.service: Deactivated successfully. Feb 13 19:33:58.546495 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:33:58.548758 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:33:58.554003 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342). Feb 13 19:33:58.555454 systemd-logind[1471]: Removed session 22. Feb 13 19:33:58.594400 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:58.596710 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:58.603195 systemd-logind[1471]: New session 23 of user core. Feb 13 19:33:58.612855 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:33:58.738345 sshd[4285]: Connection closed by 10.0.0.1 port 45342 Feb 13 19:33:58.738719 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:58.743217 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:45342.service: Deactivated successfully. Feb 13 19:33:58.745356 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:33:58.746313 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:33:58.747256 systemd-logind[1471]: Removed session 23. Feb 13 19:34:03.751598 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:45352.service - OpenSSH per-connection server daemon (10.0.0.1:45352). Feb 13 19:34:03.789128 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 45352 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:03.790955 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:03.795575 systemd-logind[1471]: New session 24 of user core. Feb 13 19:34:03.804705 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:34:04.142588 sshd[4301]: Connection closed by 10.0.0.1 port 45352 Feb 13 19:34:04.143452 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:04.147910 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:45352.service: Deactivated successfully. Feb 13 19:34:04.150041 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:34:04.150618 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:34:04.151659 systemd-logind[1471]: Removed session 24. Feb 13 19:34:08.137272 kubelet[2681]: E0213 19:34:08.137209 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:08.943158 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:41038.service - OpenSSH per-connection server daemon (10.0.0.1:41038). 
Feb 13 19:34:08.981844 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 41038 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:08.983740 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:08.988387 systemd-logind[1471]: New session 25 of user core. Feb 13 19:34:08.997708 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:34:09.102028 sshd[4320]: Connection closed by 10.0.0.1 port 41038 Feb 13 19:34:09.102435 sshd-session[4318]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:09.106091 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:41038.service: Deactivated successfully. Feb 13 19:34:09.107959 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:34:09.108540 systemd-logind[1471]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:34:09.109417 systemd-logind[1471]: Removed session 25. Feb 13 19:34:12.138344 kubelet[2681]: E0213 19:34:12.138145 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:14.115254 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:41040.service - OpenSSH per-connection server daemon (10.0.0.1:41040). Feb 13 19:34:14.156134 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:14.157915 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:14.162308 systemd-logind[1471]: New session 26 of user core. Feb 13 19:34:14.175658 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:34:14.293204 sshd[4335]: Connection closed by 10.0.0.1 port 41040 Feb 13 19:34:14.293625 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:14.297466 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:41040.service: Deactivated successfully. Feb 13 19:34:14.299359 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:34:14.300140 systemd-logind[1471]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:34:14.301128 systemd-logind[1471]: Removed session 26. Feb 13 19:34:17.138093 kubelet[2681]: E0213 19:34:17.138029 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:19.305480 systemd[1]: Started sshd@26-10.0.0.22:22-10.0.0.1:44022.service - OpenSSH per-connection server daemon (10.0.0.1:44022). Feb 13 19:34:19.342110 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 44022 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:19.343422 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:19.347322 systemd-logind[1471]: New session 27 of user core. Feb 13 19:34:19.360792 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:34:19.494682 sshd[4349]: Connection closed by 10.0.0.1 port 44022 Feb 13 19:34:19.495069 sshd-session[4347]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:19.500007 systemd[1]: sshd@26-10.0.0.22:22-10.0.0.1:44022.service: Deactivated successfully. Feb 13 19:34:19.502768 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:34:19.503633 systemd-logind[1471]: Session 27 logged out. Waiting for processes to exit. 
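The recurring dns.go:153 warnings above are kubelet noting that the node's /etc/resolv.conf lists more nameservers than libc resolvers will actually use (glibc honors only the first three), so it truncates the list down to the applied line shown (1.1.1.1 1.0.0.1 8.8.8.8) and logs the drop each time it builds pod DNS config. A minimal sketch of that cap, written as standalone Go for illustration rather than kubelet's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the common libc limit of three resolvers;
// kubelet applies the same cap when composing a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// This is the condition behind "Nameserver limits exceeded".
		fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}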
Feb 13 19:34:19.516723 systemd-logind[1471]: Removed session 27. Feb 13 19:34:19.517983 systemd[1]: Started sshd@27-10.0.0.22:22-10.0.0.1:44032.service - OpenSSH per-connection server daemon (10.0.0.1:44032). Feb 13 19:34:19.554527 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 44032 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:19.556008 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:19.560683 systemd-logind[1471]: New session 28 of user core. Feb 13 19:34:19.571723 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:34:21.352109 kubelet[2681]: I0213 19:34:21.352045 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qlhg6" podStartSLOduration=77.352025563 podStartE2EDuration="1m17.352025563s" podCreationTimestamp="2025-02-13 19:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:33:34.100626923 +0000 UTC m=+44.066848379" watchObservedRunningTime="2025-02-13 19:34:21.352025563 +0000 UTC m=+91.318247009" Feb 13 19:34:21.364917 containerd[1495]: time="2025-02-13T19:34:21.364871027Z" level=info msg="StopContainer for \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\" with timeout 30 (s)" Feb 13 19:34:21.373444 containerd[1495]: time="2025-02-13T19:34:21.373402121Z" level=info msg="Stop container \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\" with signal terminated" Feb 13 19:34:21.389386 systemd[1]: cri-containerd-4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5.scope: Deactivated successfully. Feb 13 19:34:21.398317 containerd[1495]: time="2025-02-13T19:34:21.398253761Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:34:21.398994 containerd[1495]: time="2025-02-13T19:34:21.398898122Z" level=info msg="StopContainer for \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\" with timeout 2 (s)" Feb 13 19:34:21.399152 containerd[1495]: time="2025-02-13T19:34:21.399123785Z" level=info msg="Stop container \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\" with signal terminated" Feb 13 19:34:21.407107 systemd-networkd[1407]: lxc_health: Link DOWN Feb 13 19:34:21.407116 systemd-networkd[1407]: lxc_health: Lost carrier Feb 13 19:34:21.416078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5-rootfs.mount: Deactivated successfully. Feb 13 19:34:21.430419 containerd[1495]: time="2025-02-13T19:34:21.430319214Z" level=info msg="shim disconnected" id=4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5 namespace=k8s.io Feb 13 19:34:21.430419 containerd[1495]: time="2025-02-13T19:34:21.430374428Z" level=warning msg="cleaning up after shim disconnected" id=4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5 namespace=k8s.io Feb 13 19:34:21.430419 containerd[1495]: time="2025-02-13T19:34:21.430383254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:21.432513 systemd[1]: cri-containerd-5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff.scope: Deactivated successfully. 
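The "failed to reload cni configuration" error above is emitted by a filesystem watch on /etc/cni/net.d: deleting 05-cilium.conf during Cilium teardown leaves no network config behind, so the triggered reload finds nothing to load. A sketch of that watch pattern, assuming the fsnotify library rather than containerd's own implementation:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory, as the CRI plugin does.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Remove) != 0 {
			// A reload would happen here; with no .conf/.conflist files
			// remaining, it fails exactly as in the log record above.
			log.Printf("cni config change: %s %s", ev.Op, ev.Name)
		}
	}
}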
Feb 13 19:34:21.432838 systemd[1]: cri-containerd-5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff.scope: Consumed 7.369s CPU time. Feb 13 19:34:21.451045 containerd[1495]: time="2025-02-13T19:34:21.450984847Z" level=info msg="StopContainer for \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\" returns successfully" Feb 13 19:34:21.456416 containerd[1495]: time="2025-02-13T19:34:21.456363644Z" level=info msg="StopPodSandbox for \"e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09\"" Feb 13 19:34:21.458591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff-rootfs.mount: Deactivated successfully. Feb 13 19:34:21.462005 containerd[1495]: time="2025-02-13T19:34:21.456425790Z" level=info msg="Container to stop \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:34:21.464586 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09-shm.mount: Deactivated successfully. Feb 13 19:34:21.466812 containerd[1495]: time="2025-02-13T19:34:21.466730284Z" level=info msg="shim disconnected" id=5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff namespace=k8s.io Feb 13 19:34:21.466812 containerd[1495]: time="2025-02-13T19:34:21.466804804Z" level=warning msg="cleaning up after shim disconnected" id=5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff namespace=k8s.io Feb 13 19:34:21.466926 containerd[1495]: time="2025-02-13T19:34:21.466816566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:21.469928 systemd[1]: cri-containerd-e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09.scope: Deactivated successfully. 
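The "Consumed 7.369s CPU time" figure comes from systemd's cgroup CPU accounting for the scope. A sketch that reads the same counter from a cgroup v2 hierarchy; the path below is an assumed example, not the exact cri-containerd scope above:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Hypothetical path: real cri-containerd scopes sit under their pod
	// slice, e.g. kubepods-burstable-pod<uid>.slice/cri-containerd-<id>.scope.
	data, err := os.ReadFile("/sys/fs/cgroup/system.slice/cpu.stat")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "usage_usec" {
			usec, _ := strconv.ParseUint(fields[1], 10, 64)
			fmt.Printf("consumed %.3fs CPU time\n", float64(usec)/1e6)
		}
	}
}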
Feb 13 19:34:21.488930 containerd[1495]: time="2025-02-13T19:34:21.488880754Z" level=info msg="StopContainer for \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\" returns successfully" Feb 13 19:34:21.489739 containerd[1495]: time="2025-02-13T19:34:21.489700553Z" level=info msg="StopPodSandbox for \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\"" Feb 13 19:34:21.490351 containerd[1495]: time="2025-02-13T19:34:21.489902131Z" level=info msg="Container to stop \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:34:21.490351 containerd[1495]: time="2025-02-13T19:34:21.489943258Z" level=info msg="Container to stop \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:34:21.490351 containerd[1495]: time="2025-02-13T19:34:21.489952295Z" level=info msg="Container to stop \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:34:21.490351 containerd[1495]: time="2025-02-13T19:34:21.489961442Z" level=info msg="Container to stop \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:34:21.490351 containerd[1495]: time="2025-02-13T19:34:21.489970209Z" level=info msg="Container to stop \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:34:21.492465 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0-shm.mount: Deactivated successfully. Feb 13 19:34:21.498259 systemd[1]: cri-containerd-eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0.scope: Deactivated successfully. 
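The StopContainer records above follow CRI's usual stop sequence: send SIGTERM ("with signal terminated"), then escalate to SIGKILL once the grace period ("timeout 30") expires. A rough equivalent with the containerd Go client, under the k8s.io namespace these logs use; a sketch, not the CRI plugin's actual code:

package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func stop(id string, grace time.Duration) error {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		return err
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}

	// SIGTERM first, as in "Stop container ... with signal terminated".
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh:
		return nil
	case <-time.After(grace): // the "timeout 30 (s)" in the log
		return task.Kill(ctx, syscall.SIGKILL)
	}
}

func main() {
	if err := stop("4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5", 30*time.Second); err != nil {
		log.Fatal(err)
	}
}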
Feb 13 19:34:21.500370 containerd[1495]: time="2025-02-13T19:34:21.500314407Z" level=info msg="shim disconnected" id=e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09 namespace=k8s.io Feb 13 19:34:21.500587 containerd[1495]: time="2025-02-13T19:34:21.500555961Z" level=warning msg="cleaning up after shim disconnected" id=e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09 namespace=k8s.io Feb 13 19:34:21.500587 containerd[1495]: time="2025-02-13T19:34:21.500573093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:21.519572 containerd[1495]: time="2025-02-13T19:34:21.519470206Z" level=info msg="TearDown network for sandbox \"e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09\" successfully" Feb 13 19:34:21.519572 containerd[1495]: time="2025-02-13T19:34:21.519530729Z" level=info msg="StopPodSandbox for \"e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09\" returns successfully" Feb 13 19:34:21.526521 containerd[1495]: time="2025-02-13T19:34:21.526443105Z" level=info msg="shim disconnected" id=eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0 namespace=k8s.io Feb 13 19:34:21.526521 containerd[1495]: time="2025-02-13T19:34:21.526496115Z" level=warning msg="cleaning up after shim disconnected" id=eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0 namespace=k8s.io Feb 13 19:34:21.526521 containerd[1495]: time="2025-02-13T19:34:21.526530750Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:21.545480 containerd[1495]: time="2025-02-13T19:34:21.545429716Z" level=info msg="TearDown network for sandbox \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" successfully" Feb 13 19:34:21.545480 containerd[1495]: time="2025-02-13T19:34:21.545465333Z" level=info msg="StopPodSandbox for \"eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0\" returns successfully" Feb 13 19:34:21.573717 kubelet[2681]: I0213 19:34:21.573649 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-cgroup\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.573928 kubelet[2681]: I0213 19:34:21.573730 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.674589 kubelet[2681]: I0213 19:34:21.674363 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-hostproc\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.674589 kubelet[2681]: I0213 19:34:21.674417 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cni-path\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.674589 kubelet[2681]: I0213 19:34:21.674434 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-host-proc-sys-net\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.674589 kubelet[2681]: I0213 19:34:21.674546 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.674589 kubelet[2681]: I0213 19:34:21.674542 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-hostproc" (OuterVolumeSpecName: "hostproc") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.674880 kubelet[2681]: I0213 19:34:21.674575 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cni-path" (OuterVolumeSpecName: "cni-path") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.674880 kubelet[2681]: I0213 19:34:21.674596 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7vtr\" (UniqueName: \"kubernetes.io/projected/9716826f-40e7-4295-93c9-ec67bbb84691-kube-api-access-c7vtr\") pod \"9716826f-40e7-4295-93c9-ec67bbb84691\" (UID: \"9716826f-40e7-4295-93c9-ec67bbb84691\") " Feb 13 19:34:21.674880 kubelet[2681]: I0213 19:34:21.674612 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-bpf-maps\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.674880 kubelet[2681]: I0213 19:34:21.674625 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-xtables-lock\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.674880 kubelet[2681]: I0213 19:34:21.674641 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-config-path\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.674880 kubelet[2681]: I0213 19:34:21.674656 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-clustermesh-secrets\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.675091 kubelet[2681]: I0213 19:34:21.674671 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-hubble-tls\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.675091 kubelet[2681]: I0213 19:34:21.674687 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h29qx\" (UniqueName: \"kubernetes.io/projected/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-kube-api-access-h29qx\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.675091 kubelet[2681]: I0213 19:34:21.674702 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-lib-modules\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.675091 kubelet[2681]: I0213 19:34:21.674716 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-etc-cni-netd\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.675091 kubelet[2681]: I0213 19:34:21.674731 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-host-proc-sys-kernel\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: 
\"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.675091 kubelet[2681]: I0213 19:34:21.674747 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9716826f-40e7-4295-93c9-ec67bbb84691-cilium-config-path\") pod \"9716826f-40e7-4295-93c9-ec67bbb84691\" (UID: \"9716826f-40e7-4295-93c9-ec67bbb84691\") " Feb 13 19:34:21.675320 kubelet[2681]: I0213 19:34:21.674765 2681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-run\") pod \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\" (UID: \"0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6\") " Feb 13 19:34:21.675320 kubelet[2681]: I0213 19:34:21.674804 2681 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.675320 kubelet[2681]: I0213 19:34:21.674815 2681 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.675320 kubelet[2681]: I0213 19:34:21.674824 2681 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.675320 kubelet[2681]: I0213 19:34:21.674836 2681 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.675320 kubelet[2681]: I0213 19:34:21.674682 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.675320 kubelet[2681]: I0213 19:34:21.674856 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.675575 kubelet[2681]: I0213 19:34:21.674867 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.678861 kubelet[2681]: I0213 19:34:21.678661 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:34:21.678861 kubelet[2681]: I0213 19:34:21.678727 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:34:21.678861 kubelet[2681]: I0213 19:34:21.678767 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.678861 kubelet[2681]: I0213 19:34:21.678783 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.678861 kubelet[2681]: I0213 19:34:21.678809 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:34:21.679073 kubelet[2681]: I0213 19:34:21.678803 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:34:21.679260 kubelet[2681]: I0213 19:34:21.679229 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9716826f-40e7-4295-93c9-ec67bbb84691-kube-api-access-c7vtr" (OuterVolumeSpecName: "kube-api-access-c7vtr") pod "9716826f-40e7-4295-93c9-ec67bbb84691" (UID: "9716826f-40e7-4295-93c9-ec67bbb84691"). InnerVolumeSpecName "kube-api-access-c7vtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:34:21.681161 kubelet[2681]: I0213 19:34:21.681114 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-kube-api-access-h29qx" (OuterVolumeSpecName: "kube-api-access-h29qx") pod "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" (UID: "0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6"). InnerVolumeSpecName "kube-api-access-h29qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:34:21.682374 kubelet[2681]: I0213 19:34:21.682343 2681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716826f-40e7-4295-93c9-ec67bbb84691-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9716826f-40e7-4295-93c9-ec67bbb84691" (UID: "9716826f-40e7-4295-93c9-ec67bbb84691"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:34:21.775143 kubelet[2681]: I0213 19:34:21.775078 2681 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775143 kubelet[2681]: I0213 19:34:21.775126 2681 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775143 kubelet[2681]: I0213 19:34:21.775139 2681 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c7vtr\" (UniqueName: \"kubernetes.io/projected/9716826f-40e7-4295-93c9-ec67bbb84691-kube-api-access-c7vtr\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775143 kubelet[2681]: I0213 19:34:21.775151 2681 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775143 kubelet[2681]: I0213 19:34:21.775161 2681 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775425 kubelet[2681]: I0213 19:34:21.775172 2681 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h29qx\" (UniqueName: \"kubernetes.io/projected/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-kube-api-access-h29qx\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775425 kubelet[2681]: I0213 19:34:21.775182 2681 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775425 kubelet[2681]: I0213 19:34:21.775192 2681 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775425 kubelet[2681]: I0213 19:34:21.775201 2681 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775425 kubelet[2681]: I0213 19:34:21.775210 2681 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775425 kubelet[2681]: I0213 19:34:21.775221 2681 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9716826f-40e7-4295-93c9-ec67bbb84691-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:21.775425 kubelet[2681]: I0213 19:34:21.775229 2681 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:34:22.033697 kubelet[2681]: I0213 19:34:22.033660 2681 scope.go:117] "RemoveContainer" containerID="4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5" Feb 13 
19:34:22.040714 containerd[1495]: time="2025-02-13T19:34:22.040662893Z" level=info msg="RemoveContainer for \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\"" Feb 13 19:34:22.041054 systemd[1]: Removed slice kubepods-besteffort-pod9716826f_40e7_4295_93c9_ec67bbb84691.slice - libcontainer container kubepods-besteffort-pod9716826f_40e7_4295_93c9_ec67bbb84691.slice. Feb 13 19:34:22.043873 systemd[1]: Removed slice kubepods-burstable-pod0a7d6744_5e0d_4db8_8323_bf47bcf1c7d6.slice - libcontainer container kubepods-burstable-pod0a7d6744_5e0d_4db8_8323_bf47bcf1c7d6.slice. Feb 13 19:34:22.043953 systemd[1]: kubepods-burstable-pod0a7d6744_5e0d_4db8_8323_bf47bcf1c7d6.slice: Consumed 7.473s CPU time. Feb 13 19:34:22.161645 containerd[1495]: time="2025-02-13T19:34:22.161581015Z" level=info msg="RemoveContainer for \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\" returns successfully" Feb 13 19:34:22.161962 kubelet[2681]: I0213 19:34:22.161922 2681 scope.go:117] "RemoveContainer" containerID="4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5" Feb 13 19:34:22.162249 containerd[1495]: time="2025-02-13T19:34:22.162201139Z" level=error msg="ContainerStatus for \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\": not found" Feb 13 19:34:22.170658 kubelet[2681]: E0213 19:34:22.170589 2681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\": not found" containerID="4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5" Feb 13 19:34:22.170801 kubelet[2681]: I0213 19:34:22.170639 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5"} err="failed to get container status \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e04115127dfb8f28bfcd482d93002ef65f2cb782ce6f7b63556b051c622fcc5\": not found" Feb 13 19:34:22.170801 kubelet[2681]: I0213 19:34:22.170744 2681 scope.go:117] "RemoveContainer" containerID="5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff" Feb 13 19:34:22.172341 containerd[1495]: time="2025-02-13T19:34:22.172284997Z" level=info msg="RemoveContainer for \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\"" Feb 13 19:34:22.261345 containerd[1495]: time="2025-02-13T19:34:22.261277769Z" level=info msg="RemoveContainer for \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\" returns successfully" Feb 13 19:34:22.261644 kubelet[2681]: I0213 19:34:22.261602 2681 scope.go:117] "RemoveContainer" containerID="71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a" Feb 13 19:34:22.262685 containerd[1495]: time="2025-02-13T19:34:22.262646529Z" level=info msg="RemoveContainer for \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\"" Feb 13 19:34:22.292549 containerd[1495]: time="2025-02-13T19:34:22.292389220Z" level=info msg="RemoveContainer for \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\" returns successfully" Feb 13 19:34:22.292721 kubelet[2681]: I0213 19:34:22.292671 2681 scope.go:117] 
"RemoveContainer" containerID="29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5" Feb 13 19:34:22.293885 containerd[1495]: time="2025-02-13T19:34:22.293631191Z" level=info msg="RemoveContainer for \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\"" Feb 13 19:34:22.371411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e806c94a4e87098313f711992c27d40551832ba829c9f0fae533d4f58e385e09-rootfs.mount: Deactivated successfully. Feb 13 19:34:22.371557 systemd[1]: var-lib-kubelet-pods-9716826f\x2d40e7\x2d4295\x2d93c9\x2dec67bbb84691-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc7vtr.mount: Deactivated successfully. Feb 13 19:34:22.371640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb71480d98a680c1f2f9ad02624d0d30d57d7a40aa25aa766e4b52f7cb376bb0-rootfs.mount: Deactivated successfully. Feb 13 19:34:22.371739 systemd[1]: var-lib-kubelet-pods-0a7d6744\x2d5e0d\x2d4db8\x2d8323\x2dbf47bcf1c7d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh29qx.mount: Deactivated successfully. Feb 13 19:34:22.371837 systemd[1]: var-lib-kubelet-pods-0a7d6744\x2d5e0d\x2d4db8\x2d8323\x2dbf47bcf1c7d6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:34:22.371931 systemd[1]: var-lib-kubelet-pods-0a7d6744\x2d5e0d\x2d4db8\x2d8323\x2dbf47bcf1c7d6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:34:22.411350 containerd[1495]: time="2025-02-13T19:34:22.411300143Z" level=info msg="RemoveContainer for \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\" returns successfully" Feb 13 19:34:22.411786 kubelet[2681]: I0213 19:34:22.411603 2681 scope.go:117] "RemoveContainer" containerID="63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165" Feb 13 19:34:22.412580 containerd[1495]: time="2025-02-13T19:34:22.412550320Z" level=info msg="RemoveContainer for \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\"" Feb 13 19:34:22.511804 containerd[1495]: time="2025-02-13T19:34:22.511734674Z" level=info msg="RemoveContainer for \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\" returns successfully" Feb 13 19:34:22.512092 kubelet[2681]: I0213 19:34:22.512042 2681 scope.go:117] "RemoveContainer" containerID="6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46" Feb 13 19:34:22.513279 containerd[1495]: time="2025-02-13T19:34:22.513217056Z" level=info msg="RemoveContainer for \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\"" Feb 13 19:34:22.632980 containerd[1495]: time="2025-02-13T19:34:22.632818927Z" level=info msg="RemoveContainer for \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\" returns successfully" Feb 13 19:34:22.633157 kubelet[2681]: I0213 19:34:22.633094 2681 scope.go:117] "RemoveContainer" containerID="5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff" Feb 13 19:34:22.633865 containerd[1495]: time="2025-02-13T19:34:22.633803976Z" level=error msg="ContainerStatus for \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\": not found" Feb 13 19:34:22.634175 kubelet[2681]: E0213 19:34:22.634106 2681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\": not found" containerID="5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff" Feb 13 19:34:22.634175 kubelet[2681]: I0213 19:34:22.634162 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff"} err="failed to get container status \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\": rpc error: code = NotFound desc = an error occurred when try to find container \"5146d7ac564acbef84ce035fc8c6d68509a441e7f72428844e89dfd06f470eff\": not found" Feb 13 19:34:22.634373 kubelet[2681]: I0213 19:34:22.634198 2681 scope.go:117] "RemoveContainer" containerID="71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a" Feb 13 19:34:22.634671 containerd[1495]: time="2025-02-13T19:34:22.634587447Z" level=error msg="ContainerStatus for \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\": not found" Feb 13 19:34:22.634836 kubelet[2681]: E0213 19:34:22.634789 2681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\": not found" containerID="71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a" Feb 13 19:34:22.634836 kubelet[2681]: I0213 19:34:22.634814 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a"} err="failed to get container status \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\": rpc error: code = NotFound desc = an error occurred when try to find container \"71f82a3163cc197cee2bef34b7909703de9f17404e1eb3a5f95e28081a2c944a\": not found" Feb 13 19:34:22.634836 kubelet[2681]: I0213 19:34:22.634829 2681 scope.go:117] "RemoveContainer" containerID="29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5" Feb 13 19:34:22.635108 containerd[1495]: time="2025-02-13T19:34:22.635064152Z" level=error msg="ContainerStatus for \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\": not found" Feb 13 19:34:22.635229 kubelet[2681]: E0213 19:34:22.635200 2681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\": not found" containerID="29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5" Feb 13 19:34:22.635269 kubelet[2681]: I0213 19:34:22.635226 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5"} err="failed to get container status \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"29269c892242f36252907cd969671e55d9221e888cd222386305f94bc87f59b5\": not found" Feb 13 19:34:22.635269 kubelet[2681]: I0213 19:34:22.635253 2681 
scope.go:117] "RemoveContainer" containerID="63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165" Feb 13 19:34:22.635452 containerd[1495]: time="2025-02-13T19:34:22.635399431Z" level=error msg="ContainerStatus for \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\": not found" Feb 13 19:34:22.635620 kubelet[2681]: E0213 19:34:22.635567 2681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\": not found" containerID="63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165" Feb 13 19:34:22.635620 kubelet[2681]: I0213 19:34:22.635585 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165"} err="failed to get container status \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\": rpc error: code = NotFound desc = an error occurred when try to find container \"63f9e6f7b487acb04aa785752e32a10210b58d0b06ffcb1cd3e422d2524f7165\": not found" Feb 13 19:34:22.635620 kubelet[2681]: I0213 19:34:22.635597 2681 scope.go:117] "RemoveContainer" containerID="6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46" Feb 13 19:34:22.638536 containerd[1495]: time="2025-02-13T19:34:22.636020406Z" level=error msg="ContainerStatus for \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\": not found" Feb 13 19:34:22.638598 kubelet[2681]: E0213 19:34:22.636384 2681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\": not found" containerID="6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46" Feb 13 19:34:22.638598 kubelet[2681]: I0213 19:34:22.636401 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46"} err="failed to get container status \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\": rpc error: code = NotFound desc = an error occurred when try to find container \"6416326141d9f9ee9e23abc80b9efc706eb5036dc1ef9e5626597ff8bdfa6b46\": not found" Feb 13 19:34:23.185807 sshd[4364]: Connection closed by 10.0.0.1 port 44032 Feb 13 19:34:23.186361 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:23.209358 systemd[1]: sshd@27-10.0.0.22:22-10.0.0.1:44032.service: Deactivated successfully. Feb 13 19:34:23.211576 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 19:34:23.213342 systemd-logind[1471]: Session 28 logged out. Waiting for processes to exit. Feb 13 19:34:23.222037 systemd[1]: Started sshd@28-10.0.0.22:22-10.0.0.1:44046.service - OpenSSH per-connection server daemon (10.0.0.1:44046). Feb 13 19:34:23.223147 systemd-logind[1471]: Removed session 28. 
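The ContainerStatus NotFound errors above are a benign race: kubelet re-queries containers it has just removed, containerd answers with gRPC NotFound, and kubelet treats that as already-deleted rather than a failure. A sketch of that classification using the standard gRPC status package:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a runtime error just means the
// container was removed before the status call landed.
func alreadyGone(err error) bool {
	if err == nil {
		return false
	}
	if s, ok := status.FromError(err); ok {
		return s.Code() == codes.NotFound
	}
	return false
}

func main() {
	err := status.Error(codes.NotFound,
		`an error occurred when try to find container "5146d7ac...": not found`)
	fmt.Println(alreadyGone(err))                 // true: safe to ignore
	fmt.Println(alreadyGone(errors.New("boom"))) // false: a real failure
}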
Feb 13 19:34:23.264678 sshd[4525]: Accepted publickey for core from 10.0.0.1 port 44046 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:23.266181 sshd-session[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:23.271435 systemd-logind[1471]: New session 29 of user core. Feb 13 19:34:23.276667 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 19:34:23.856664 sshd[4528]: Connection closed by 10.0.0.1 port 44046 Feb 13 19:34:23.857084 sshd-session[4525]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:23.869091 systemd[1]: sshd@28-10.0.0.22:22-10.0.0.1:44046.service: Deactivated successfully. Feb 13 19:34:23.871596 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 19:34:23.873366 systemd-logind[1471]: Session 29 logged out. Waiting for processes to exit. Feb 13 19:34:23.878829 systemd[1]: Started sshd@29-10.0.0.22:22-10.0.0.1:44050.service - OpenSSH per-connection server daemon (10.0.0.1:44050). Feb 13 19:34:23.879819 systemd-logind[1471]: Removed session 29. Feb 13 19:34:23.912452 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 44050 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:23.914046 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:23.918682 systemd-logind[1471]: New session 30 of user core. Feb 13 19:34:23.937798 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 19:34:23.990943 sshd[4542]: Connection closed by 10.0.0.1 port 44050 Feb 13 19:34:23.991359 sshd-session[4539]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:24.000775 systemd[1]: sshd@29-10.0.0.22:22-10.0.0.1:44050.service: Deactivated successfully. Feb 13 19:34:24.002830 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 19:34:24.004735 systemd-logind[1471]: Session 30 logged out. Waiting for processes to exit. Feb 13 19:34:24.010841 systemd[1]: Started sshd@30-10.0.0.22:22-10.0.0.1:44058.service - OpenSSH per-connection server daemon (10.0.0.1:44058). Feb 13 19:34:24.011844 systemd-logind[1471]: Removed session 30. Feb 13 19:34:24.050154 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 44058 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:24.052103 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:24.056769 systemd-logind[1471]: New session 31 of user core. Feb 13 19:34:24.066798 systemd[1]: Started session-31.scope - Session 31 of User core. 
Feb 13 19:34:24.140846 kubelet[2681]: I0213 19:34:24.139864 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" path="/var/lib/kubelet/pods/0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6/volumes" Feb 13 19:34:24.140846 kubelet[2681]: I0213 19:34:24.140822 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9716826f-40e7-4295-93c9-ec67bbb84691" path="/var/lib/kubelet/pods/9716826f-40e7-4295-93c9-ec67bbb84691/volumes" Feb 13 19:34:24.153049 kubelet[2681]: I0213 19:34:24.152767 2681 topology_manager.go:215] "Topology Admit Handler" podUID="ecdf0547-9539-4e30-ae04-669c9c981375" podNamespace="kube-system" podName="cilium-pbj6b" Feb 13 19:34:24.158567 kubelet[2681]: E0213 19:34:24.158535 2681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" containerName="clean-cilium-state" Feb 13 19:34:24.158567 kubelet[2681]: E0213 19:34:24.158565 2681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9716826f-40e7-4295-93c9-ec67bbb84691" containerName="cilium-operator" Feb 13 19:34:24.158663 kubelet[2681]: E0213 19:34:24.158575 2681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" containerName="mount-cgroup" Feb 13 19:34:24.158663 kubelet[2681]: E0213 19:34:24.158583 2681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" containerName="mount-bpf-fs" Feb 13 19:34:24.158663 kubelet[2681]: E0213 19:34:24.158591 2681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" containerName="apply-sysctl-overwrites" Feb 13 19:34:24.158663 kubelet[2681]: E0213 19:34:24.158600 2681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" containerName="cilium-agent" Feb 13 19:34:24.158663 kubelet[2681]: I0213 19:34:24.158630 2681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7d6744-5e0d-4db8-8323-bf47bcf1c7d6" containerName="cilium-agent" Feb 13 19:34:24.158663 kubelet[2681]: I0213 19:34:24.158637 2681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9716826f-40e7-4295-93c9-ec67bbb84691" containerName="cilium-operator" Feb 13 19:34:24.170953 systemd[1]: Created slice kubepods-burstable-podecdf0547_9539_4e30_ae04_669c9c981375.slice - libcontainer container kubepods-burstable-podecdf0547_9539_4e30_ae04_669c9c981375.slice. 
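The slice names above encode each pod's QoS class and UID, with the UID's dashes rewritten to underscores because systemd reserves "-" to separate slice path components (hence kubepods-burstable-podecdf0547_9539_4e30_ae04_669c9c981375.slice). An illustrative helper showing that mapping, not kubelet's exact function:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice name kubelet uses for a pod:
// dashes in the UID become underscores so they are not parsed as
// slice hierarchy separators.
func podSlice(qosClass, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "ecdf0547-9539-4e30-ae04-669c9c981375"))
	// kubepods-burstable-podecdf0547_9539_4e30_ae04_669c9c981375.slice
}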
Feb 13 19:34:24.290950 kubelet[2681]: I0213 19:34:24.290894 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-bpf-maps\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.290950 kubelet[2681]: I0213 19:34:24.290948 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-cilium-cgroup\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291160 kubelet[2681]: I0213 19:34:24.290971 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-host-proc-sys-net\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291160 kubelet[2681]: I0213 19:34:24.290992 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-host-proc-sys-kernel\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291160 kubelet[2681]: I0213 19:34:24.291014 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-cilium-run\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291160 kubelet[2681]: I0213 19:34:24.291065 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-hostproc\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291160 kubelet[2681]: I0213 19:34:24.291115 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecdf0547-9539-4e30-ae04-669c9c981375-clustermesh-secrets\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291160 kubelet[2681]: I0213 19:34:24.291134 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecdf0547-9539-4e30-ae04-669c9c981375-cilium-config-path\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291339 kubelet[2681]: I0213 19:34:24.291156 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-xtables-lock\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291339 kubelet[2681]: I0213 19:34:24.291179 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-etc-cni-netd\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291339 kubelet[2681]: I0213 19:34:24.291198 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-lib-modules\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291339 kubelet[2681]: I0213 19:34:24.291220 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6mgl\" (UniqueName: \"kubernetes.io/projected/ecdf0547-9539-4e30-ae04-669c9c981375-kube-api-access-f6mgl\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291339 kubelet[2681]: I0213 19:34:24.291253 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecdf0547-9539-4e30-ae04-669c9c981375-cni-path\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291339 kubelet[2681]: I0213 19:34:24.291283 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecdf0547-9539-4e30-ae04-669c9c981375-hubble-tls\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.291545 kubelet[2681]: I0213 19:34:24.291311 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ecdf0547-9539-4e30-ae04-669c9c981375-cilium-ipsec-secrets\") pod \"cilium-pbj6b\" (UID: \"ecdf0547-9539-4e30-ae04-669c9c981375\") " pod="kube-system/cilium-pbj6b"
Feb 13 19:34:24.475100 kubelet[2681]: E0213 19:34:24.475032 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:24.475843 containerd[1495]: time="2025-02-13T19:34:24.475718216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbj6b,Uid:ecdf0547-9539-4e30-ae04-669c9c981375,Namespace:kube-system,Attempt:0,}"
Feb 13 19:34:24.505306 containerd[1495]: time="2025-02-13T19:34:24.502398387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:34:24.506326 containerd[1495]: time="2025-02-13T19:34:24.506156782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:34:24.506326 containerd[1495]: time="2025-02-13T19:34:24.506180056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:34:24.506326 containerd[1495]: time="2025-02-13T19:34:24.506287658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:34:24.534751 systemd[1]: Started cri-containerd-83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2.scope - libcontainer container 83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2.
Feb 13 19:34:24.558185 containerd[1495]: time="2025-02-13T19:34:24.558134920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbj6b,Uid:ecdf0547-9539-4e30-ae04-669c9c981375,Namespace:kube-system,Attempt:0,} returns sandbox id \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\""
Feb 13 19:34:24.558907 kubelet[2681]: E0213 19:34:24.558880 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:24.561127 containerd[1495]: time="2025-02-13T19:34:24.561074347Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:34:24.579514 containerd[1495]: time="2025-02-13T19:34:24.579428690Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1137f86b5275498d195b3f91b5228373aa0ac0fba8aba3d4d295c4b54cbdd190\""
Feb 13 19:34:24.580072 containerd[1495]: time="2025-02-13T19:34:24.580027564Z" level=info msg="StartContainer for \"1137f86b5275498d195b3f91b5228373aa0ac0fba8aba3d4d295c4b54cbdd190\""
Feb 13 19:34:24.610984 systemd[1]: Started cri-containerd-1137f86b5275498d195b3f91b5228373aa0ac0fba8aba3d4d295c4b54cbdd190.scope - libcontainer container 1137f86b5275498d195b3f91b5228373aa0ac0fba8aba3d4d295c4b54cbdd190.
Feb 13 19:34:24.698607 systemd[1]: cri-containerd-1137f86b5275498d195b3f91b5228373aa0ac0fba8aba3d4d295c4b54cbdd190.scope: Deactivated successfully.
Feb 13 19:34:24.706644 containerd[1495]: time="2025-02-13T19:34:24.706600632Z" level=info msg="StartContainer for \"1137f86b5275498d195b3f91b5228373aa0ac0fba8aba3d4d295c4b54cbdd190\" returns successfully"
Feb 13 19:34:25.046178 kubelet[2681]: E0213 19:34:25.046141 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:25.082519 containerd[1495]: time="2025-02-13T19:34:25.082423482Z" level=info msg="shim disconnected" id=1137f86b5275498d195b3f91b5228373aa0ac0fba8aba3d4d295c4b54cbdd190 namespace=k8s.io
Feb 13 19:34:25.082519 containerd[1495]: time="2025-02-13T19:34:25.082478716Z" level=warning msg="cleaning up after shim disconnected" id=1137f86b5275498d195b3f91b5228373aa0ac0fba8aba3d4d295c4b54cbdd190 namespace=k8s.io
Feb 13 19:34:25.082519 containerd[1495]: time="2025-02-13T19:34:25.082487242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:34:25.137803 kubelet[2681]: E0213 19:34:25.137759 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:25.192621 kubelet[2681]: E0213 19:34:25.192557 2681 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:34:26.048514 kubelet[2681]: E0213 19:34:26.048480 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:26.050420 containerd[1495]: time="2025-02-13T19:34:26.050336742Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:34:26.064460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319965645.mount: Deactivated successfully.
Feb 13 19:34:26.067862 containerd[1495]: time="2025-02-13T19:34:26.067807074Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a\""
Feb 13 19:34:26.068570 containerd[1495]: time="2025-02-13T19:34:26.068532836Z" level=info msg="StartContainer for \"aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a\""
Feb 13 19:34:26.103734 systemd[1]: Started cri-containerd-aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a.scope - libcontainer container aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a.
Feb 13 19:34:26.145219 systemd[1]: cri-containerd-aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a.scope: Deactivated successfully.
Feb 13 19:34:26.226219 containerd[1495]: time="2025-02-13T19:34:26.226159537Z" level=info msg="StartContainer for \"aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a\" returns successfully"
Feb 13 19:34:26.324029 containerd[1495]: time="2025-02-13T19:34:26.323859252Z" level=info msg="shim disconnected" id=aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a namespace=k8s.io
Feb 13 19:34:26.324029 containerd[1495]: time="2025-02-13T19:34:26.323914145Z" level=warning msg="cleaning up after shim disconnected" id=aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a namespace=k8s.io
Feb 13 19:34:26.324029 containerd[1495]: time="2025-02-13T19:34:26.323922621Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:34:26.398381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aac800d4d1721741c379af33c71fabde399596b26558fe4c5c58c01b3508373a-rootfs.mount: Deactivated successfully.
Feb 13 19:34:27.052496 kubelet[2681]: E0213 19:34:27.052461 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:27.054668 containerd[1495]: time="2025-02-13T19:34:27.054636264Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:34:27.538756 containerd[1495]: time="2025-02-13T19:34:27.538685678Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0\""
Feb 13 19:34:27.539483 containerd[1495]: time="2025-02-13T19:34:27.539337872Z" level=info msg="StartContainer for \"b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0\""
Feb 13 19:34:27.570766 systemd[1]: Started cri-containerd-b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0.scope - libcontainer container b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0.
Feb 13 19:34:27.608072 systemd[1]: cri-containerd-b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0.scope: Deactivated successfully.
Feb 13 19:34:27.722760 containerd[1495]: time="2025-02-13T19:34:27.722710063Z" level=info msg="StartContainer for \"b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0\" returns successfully"
Feb 13 19:34:27.744351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0-rootfs.mount: Deactivated successfully.
Feb 13 19:34:27.982593 containerd[1495]: time="2025-02-13T19:34:27.982529897Z" level=info msg="shim disconnected" id=b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0 namespace=k8s.io
Feb 13 19:34:27.982593 containerd[1495]: time="2025-02-13T19:34:27.982586643Z" level=warning msg="cleaning up after shim disconnected" id=b4d51ccb0d63fa3dfb5ab115cab83be586e833cb89e49f7a9da06d6adf488bb0 namespace=k8s.io
Feb 13 19:34:27.982593 containerd[1495]: time="2025-02-13T19:34:27.982596973Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:34:28.166352 kubelet[2681]: E0213 19:34:28.166310 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:28.168025 containerd[1495]: time="2025-02-13T19:34:28.167989211Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:34:28.567484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342690076.mount: Deactivated successfully.
Feb 13 19:34:29.030640 containerd[1495]: time="2025-02-13T19:34:29.030563646Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca\""
Feb 13 19:34:29.031476 containerd[1495]: time="2025-02-13T19:34:29.031430063Z" level=info msg="StartContainer for \"077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca\""
Feb 13 19:34:29.066761 systemd[1]: Started cri-containerd-077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca.scope - libcontainer container 077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca.
Feb 13 19:34:29.093371 systemd[1]: cri-containerd-077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca.scope: Deactivated successfully.
Feb 13 19:34:29.231896 containerd[1495]: time="2025-02-13T19:34:29.231832640Z" level=info msg="StartContainer for \"077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca\" returns successfully"
Feb 13 19:34:29.236569 kubelet[2681]: E0213 19:34:29.236536 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:29.451643 containerd[1495]: time="2025-02-13T19:34:29.451456625Z" level=info msg="shim disconnected" id=077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca namespace=k8s.io
Feb 13 19:34:29.451643 containerd[1495]: time="2025-02-13T19:34:29.451543508Z" level=warning msg="cleaning up after shim disconnected" id=077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca namespace=k8s.io
Feb 13 19:34:29.451643 containerd[1495]: time="2025-02-13T19:34:29.451555200Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:34:29.564763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-077067202651c1df856374c866c827967957cadd955746120a7720658847c5ca-rootfs.mount: Deactivated successfully.
Feb 13 19:34:30.194035 kubelet[2681]: E0213 19:34:30.193992 2681 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:34:30.240407 kubelet[2681]: E0213 19:34:30.240357 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:30.242454 containerd[1495]: time="2025-02-13T19:34:30.242386301Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:34:30.258577 containerd[1495]: time="2025-02-13T19:34:30.258522308Z" level=info msg="CreateContainer within sandbox \"83c9f06b40a9b2c25f606dc850095d10f7a39c9073d214be4a2b67fd626678c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8a4ab6253530a2f54468b81b0235290b35fd1390e4e1306be07a017e3963d8d5\""
Feb 13 19:34:30.259180 containerd[1495]: time="2025-02-13T19:34:30.259128867Z" level=info msg="StartContainer for \"8a4ab6253530a2f54468b81b0235290b35fd1390e4e1306be07a017e3963d8d5\""
Feb 13 19:34:30.291716 systemd[1]: Started cri-containerd-8a4ab6253530a2f54468b81b0235290b35fd1390e4e1306be07a017e3963d8d5.scope - libcontainer container 8a4ab6253530a2f54468b81b0235290b35fd1390e4e1306be07a017e3963d8d5.
Feb 13 19:34:30.322626 containerd[1495]: time="2025-02-13T19:34:30.322567306Z" level=info msg="StartContainer for \"8a4ab6253530a2f54468b81b0235290b35fd1390e4e1306be07a017e3963d8d5\" returns successfully"
Feb 13 19:34:30.772548 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:34:30.805546 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
Feb 13 19:34:30.857796 kernel: DRBG: Continuing without Jitter RNG
Feb 13 19:34:31.245315 kubelet[2681]: E0213 19:34:31.245280 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:31.372868 kubelet[2681]: I0213 19:34:31.372715 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pbj6b" podStartSLOduration=8.372691853 podStartE2EDuration="8.372691853s" podCreationTimestamp="2025-02-13 19:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:31.372210511 +0000 UTC m=+101.338431987" watchObservedRunningTime="2025-02-13 19:34:31.372691853 +0000 UTC m=+101.338913299"
Feb 13 19:34:32.476599 kubelet[2681]: E0213 19:34:32.476533 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:32.665828 kubelet[2681]: I0213 19:34:32.665772 2681 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:34:32Z","lastTransitionTime":"2025-02-13T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:34:34.126920 systemd-networkd[1407]: lxc_health: Link UP
Feb 13 19:34:34.134793 systemd-networkd[1407]: lxc_health: Gained carrier
Feb 13 19:34:34.477643 kubelet[2681]: E0213 19:34:34.477610 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:35.255515 kubelet[2681]: E0213 19:34:35.255460 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:35.623697 systemd-networkd[1407]: lxc_health: Gained IPv6LL
Feb 13 19:34:36.257278 kubelet[2681]: E0213 19:34:36.257230 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:39.409908 sshd[4550]: Connection closed by 10.0.0.1 port 44058
Feb 13 19:34:39.410310 sshd-session[4548]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:39.414864 systemd[1]: sshd@30-10.0.0.22:22-10.0.0.1:44058.service: Deactivated successfully.
Feb 13 19:34:39.416774 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 19:34:39.417353 systemd-logind[1471]: Session 31 logged out. Waiting for processes to exit.
Feb 13 19:34:39.418280 systemd-logind[1471]: Removed session 31.