Feb 13 15:32:36.875060 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:32:36.875081 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:32:36.875093 kernel: BIOS-provided physical RAM map:
Feb 13 15:32:36.875099 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:32:36.875105 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:32:36.875111 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:32:36.875118 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 15:32:36.875124 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 15:32:36.875130 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 15:32:36.875139 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 15:32:36.875145 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:32:36.875151 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:32:36.875157 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:32:36.875164 kernel: NX (Execute Disable) protection: active
Feb 13 15:32:36.875171 kernel: APIC: Static calls initialized
Feb 13 15:32:36.875180 kernel: SMBIOS 2.8 present.
Feb 13 15:32:36.875187 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 15:32:36.875194 kernel: Hypervisor detected: KVM
Feb 13 15:32:36.875201 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:32:36.875207 kernel: kvm-clock: using sched offset of 2266977692 cycles
Feb 13 15:32:36.875214 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:32:36.875221 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 15:32:36.875228 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:32:36.875236 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:32:36.875242 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 15:32:36.875252 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:32:36.875259 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:32:36.875266 kernel: Using GB pages for direct mapping
Feb 13 15:32:36.875272 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:32:36.875279 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 15:32:36.875286 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:36.875293 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:36.875300 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:36.875309 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 15:32:36.875316 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:36.875323 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:36.875329 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:36.875336 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:32:36.875343 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 15:32:36.875350 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 15:32:36.875360 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 15:32:36.875369 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 15:32:36.875377 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 15:32:36.875384 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 15:32:36.875391 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 15:32:36.875398 kernel: No NUMA configuration found
Feb 13 15:32:36.875405 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 15:32:36.875412 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 15:32:36.875421 kernel: Zone ranges:
Feb 13 15:32:36.875428 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:32:36.875435 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 15:32:36.875442 kernel: Normal empty
Feb 13 15:32:36.875449 kernel: Movable zone start for each node
Feb 13 15:32:36.875456 kernel: Early memory node ranges
Feb 13 15:32:36.875463 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:32:36.875470 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 15:32:36.875477 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 15:32:36.875487 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:32:36.875494 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:32:36.875501 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 15:32:36.875508 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:32:36.875515 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:32:36.875522 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:32:36.875529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:32:36.875536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:32:36.875543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:32:36.875553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:32:36.875560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:32:36.875567 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:32:36.875574 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:32:36.875581 kernel: TSC deadline timer available
Feb 13 15:32:36.875588 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:32:36.875595 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:32:36.875603 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:32:36.875610 kernel: kvm-guest: setup PV sched yield
Feb 13 15:32:36.875617 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 15:32:36.875626 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:32:36.875633 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:32:36.875640 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:32:36.875647 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:32:36.875655 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:32:36.875661 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:32:36.875668 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:32:36.875675 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:32:36.875684 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:32:36.875694 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:32:36.875701 kernel: random: crng init done
Feb 13 15:32:36.875708 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:32:36.875722 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:32:36.875730 kernel: Fallback order for Node 0: 0
Feb 13 15:32:36.875737 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 15:32:36.875744 kernel: Policy zone: DMA32
Feb 13 15:32:36.875751 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:32:36.875761 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved)
Feb 13 15:32:36.875768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:32:36.875775 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:32:36.875782 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:32:36.875789 kernel: Dynamic Preempt: voluntary
Feb 13 15:32:36.875796 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:32:36.875807 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:32:36.875815 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:32:36.875822 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:32:36.875832 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:32:36.875839 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:32:36.875846 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:32:36.875853 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:32:36.875860 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:32:36.875867 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:32:36.875874 kernel: Console: colour VGA+ 80x25
Feb 13 15:32:36.875881 kernel: printk: console [ttyS0] enabled
Feb 13 15:32:36.875888 kernel: ACPI: Core revision 20230628
Feb 13 15:32:36.875898 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:32:36.875905 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:32:36.875912 kernel: x2apic enabled
Feb 13 15:32:36.875920 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:32:36.875927 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:32:36.875934 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:32:36.875941 kernel: kvm-guest: setup PV IPIs
Feb 13 15:32:36.875958 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:32:36.875965 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:32:36.875973 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 15:32:36.876014 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:32:36.876022 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:32:36.876032 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:32:36.876039 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:32:36.876047 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:32:36.876054 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:32:36.876062 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:32:36.876071 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:32:36.876079 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:32:36.876086 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:32:36.876094 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:32:36.876101 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:32:36.876109 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:32:36.876117 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:32:36.876124 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:32:36.876135 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:32:36.876142 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:32:36.876149 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:32:36.876157 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:32:36.876164 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:32:36.876172 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:32:36.876179 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:32:36.876187 kernel: landlock: Up and running.
Feb 13 15:32:36.876194 kernel: SELinux: Initializing.
Feb 13 15:32:36.876204 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:32:36.876212 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:32:36.876219 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:32:36.876227 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:32:36.876235 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:32:36.876242 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:32:36.876250 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:32:36.876257 kernel: ... version: 0
Feb 13 15:32:36.876267 kernel: ... bit width: 48
Feb 13 15:32:36.876274 kernel: ... generic registers: 6
Feb 13 15:32:36.876282 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:32:36.876289 kernel: ... max period: 00007fffffffffff
Feb 13 15:32:36.876297 kernel: ... fixed-purpose events: 0
Feb 13 15:32:36.876304 kernel: ... event mask: 000000000000003f
Feb 13 15:32:36.876311 kernel: signal: max sigframe size: 1776
Feb 13 15:32:36.876319 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:32:36.876326 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:32:36.876334 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:32:36.876343 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:32:36.876351 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:32:36.876358 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:32:36.876365 kernel: smpboot: Max logical packages: 1
Feb 13 15:32:36.876373 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 15:32:36.876380 kernel: devtmpfs: initialized
Feb 13 15:32:36.876388 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:32:36.876395 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:32:36.876403 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:32:36.876412 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:32:36.876420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:32:36.876427 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:32:36.876435 kernel: audit: type=2000 audit(1739460757.260:1): state=initialized audit_enabled=0 res=1
Feb 13 15:32:36.876442 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:32:36.876449 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:32:36.876457 kernel: cpuidle: using governor menu
Feb 13 15:32:36.876464 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:32:36.876471 kernel: dca service started, version 1.12.1
Feb 13 15:32:36.876481 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 15:32:36.876489 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 15:32:36.876496 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:32:36.876504 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:32:36.876511 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:32:36.876519 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:32:36.876526 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:32:36.876534 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:32:36.876541 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:32:36.876551 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:32:36.876558 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:32:36.876566 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:32:36.876573 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:32:36.876580 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:32:36.876588 kernel: ACPI: Interpreter enabled
Feb 13 15:32:36.876595 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:32:36.876603 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:32:36.876610 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:32:36.876620 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:32:36.876628 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:32:36.876635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:32:36.876813 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:32:36.876949 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:32:36.877089 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:32:36.877099 kernel: PCI host bridge to bus 0000:00
Feb 13 15:32:36.877227 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:32:36.877339 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:32:36.877449 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:32:36.877558 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 15:32:36.877667 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 15:32:36.877788 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 15:32:36.877899 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:32:36.878053 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:32:36.878189 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:32:36.878310 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 15:32:36.878430 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 15:32:36.878549 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 15:32:36.878668 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:32:36.878808 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:32:36.878937 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 15:32:36.879131 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 15:32:36.879303 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 15:32:36.879433 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:32:36.879555 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 15:32:36.879677 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 15:32:36.879811 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 15:32:36.879942 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:32:36.880080 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 15:32:36.880202 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 15:32:36.880323 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 15:32:36.880445 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 15:32:36.880572 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:32:36.880698 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:32:36.880878 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:32:36.881045 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 15:32:36.881201 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 15:32:36.881360 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:32:36.881484 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 15:32:36.881494 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:32:36.881507 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:32:36.881514 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:32:36.881522 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:32:36.881529 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:32:36.881537 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:32:36.881544 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:32:36.881552 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:32:36.881559 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:32:36.881567 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:32:36.881577 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:32:36.881584 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:32:36.881591 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:32:36.881599 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:32:36.881606 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:32:36.881614 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:32:36.881621 kernel: iommu: Default domain type: Translated
Feb 13 15:32:36.881629 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:32:36.881636 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:32:36.881646 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:32:36.881653 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:32:36.881661 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 15:32:36.881791 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:32:36.881911 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:32:36.882049 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:32:36.882059 kernel: vgaarb: loaded
Feb 13 15:32:36.882067 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:32:36.882078 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:32:36.882086 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:32:36.882093 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:32:36.882101 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:32:36.882108 kernel: pnp: PnP ACPI init
Feb 13 15:32:36.882243 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 15:32:36.882253 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:32:36.882261 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:32:36.882272 kernel: NET: Registered PF_INET protocol family
Feb 13 15:32:36.882279 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:32:36.882287 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:32:36.882295 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:32:36.882302 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:32:36.882310 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:32:36.882317 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:32:36.882325 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:32:36.882333 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:32:36.882343 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:32:36.882350 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:32:36.882463 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:32:36.882573 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:32:36.882705 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:32:36.882858 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 15:32:36.882991 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 15:32:36.883105 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 15:32:36.883118 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:32:36.883126 kernel: Initialise system trusted keyrings
Feb 13 15:32:36.883134 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:32:36.883141 kernel: Key type asymmetric registered
Feb 13 15:32:36.883149 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:32:36.883157 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:32:36.883164 kernel: io scheduler mq-deadline registered
Feb 13 15:32:36.883172 kernel: io scheduler kyber registered
Feb 13 15:32:36.883179 kernel: io scheduler bfq registered
Feb 13 15:32:36.883189 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:32:36.883197 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:32:36.883204 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:32:36.883212 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:32:36.883219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:32:36.883227 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:32:36.883235 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:32:36.883243 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:32:36.883250 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:32:36.883260 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:32:36.883382 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:32:36.883496 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:32:36.883609 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:32:36 UTC (1739460756)
Feb 13 15:32:36.883732 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 15:32:36.883742 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:32:36.883749 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:32:36.883757 kernel: Segment Routing with IPv6
Feb 13 15:32:36.883768 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:32:36.883776 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:32:36.883784 kernel: Key type dns_resolver registered
Feb 13 15:32:36.883791 kernel: IPI shorthand broadcast: enabled
Feb 13 15:32:36.883799 kernel: sched_clock: Marking stable (578003193, 104083080)->(694727111, -12640838)
Feb 13 15:32:36.883806 kernel: registered taskstats version 1
Feb 13 15:32:36.883814 kernel: Loading compiled-in X.509 certificates
Feb 13 15:32:36.883821 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:32:36.883829 kernel: Key type .fscrypt registered
Feb 13 15:32:36.883838 kernel: Key type fscrypt-provisioning registered
Feb 13 15:32:36.883846 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:32:36.883861 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:32:36.883870 kernel: ima: No architecture policies found
Feb 13 15:32:36.883884 kernel: clk: Disabling unused clocks
Feb 13 15:32:36.883898 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:32:36.883906 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:32:36.883914 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:32:36.883921 kernel: Run /init as init process
Feb 13 15:32:36.883931 kernel: with arguments:
Feb 13 15:32:36.883939 kernel: /init
Feb 13 15:32:36.883946 kernel: with environment:
Feb 13 15:32:36.883953 kernel: HOME=/
Feb 13 15:32:36.883960 kernel: TERM=linux
Feb 13 15:32:36.883971 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:32:36.883994 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:32:36.884004 systemd[1]: Detected virtualization kvm.
Feb 13 15:32:36.884015 systemd[1]: Detected architecture x86-64.
Feb 13 15:32:36.884023 systemd[1]: Running in initrd.
Feb 13 15:32:36.884030 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:32:36.884038 systemd[1]: Hostname set to .
Feb 13 15:32:36.884047 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:32:36.884054 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:32:36.884062 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:32:36.884070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:32:36.884082 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:32:36.884102 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:32:36.884113 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:32:36.884121 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:32:36.884131 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:32:36.884142 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:32:36.884150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:32:36.884159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:32:36.884167 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:32:36.884175 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:32:36.884183 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:32:36.884191 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:32:36.884199 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:32:36.884210 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:32:36.884219 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:32:36.884227 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:32:36.884235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:32:36.884244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:32:36.884252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:32:36.884260 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:32:36.884269 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:32:36.884279 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:32:36.884290 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:32:36.884298 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:32:36.884306 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:32:36.884315 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:32:36.884323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:36.884331 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:32:36.884339 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:32:36.884347 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:32:36.884377 systemd-journald[194]: Collecting audit messages is disabled.
Feb 13 15:32:36.884398 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:32:36.884409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:32:36.884418 systemd-journald[194]: Journal started
Feb 13 15:32:36.884438 systemd-journald[194]: Runtime Journal (/run/log/journal/097729b805ee4ff9801cd642ed566086) is 6.0M, max 48.4M, 42.3M free.
Feb 13 15:32:36.881324 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 15:32:36.915509 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:32:36.915531 kernel: Bridge firewalling registered
Feb 13 15:32:36.915542 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:32:36.910490 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 15:32:36.917239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:32:36.926219 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:32:36.928935 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:32:36.931657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:32:36.933102 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:36.936402 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:32:36.938469 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:36.944007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:32:36.954698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:36.956199 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:32:36.968150 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:32:36.971294 dracut-cmdline[229]: dracut-dracut-053
Feb 13 15:32:36.974065 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:32:36.980178 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:32:37.010584 systemd-resolved[240]: Positive Trust Anchors:
Feb 13 15:32:37.010601 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:32:37.010632 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:32:37.013169 systemd-resolved[240]: Defaulting to hostname 'linux'.
Feb 13 15:32:37.014215 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:32:37.020338 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:32:37.052011 kernel: SCSI subsystem initialized
Feb 13 15:32:37.061007 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:32:37.072024 kernel: iscsi: registered transport (tcp)
Feb 13 15:32:37.093003 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:32:37.093027 kernel: QLogic iSCSI HBA Driver
Feb 13 15:32:37.140231 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:32:37.152110 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:32:37.177073 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:32:37.177106 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:32:37.178142 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:32:37.225011 kernel: raid6: avx2x4 gen() 30361 MB/s
Feb 13 15:32:37.241999 kernel: raid6: avx2x2 gen() 28861 MB/s
Feb 13 15:32:37.259102 kernel: raid6: avx2x1 gen() 25716 MB/s
Feb 13 15:32:37.259118 kernel: raid6: using algorithm avx2x4 gen() 30361 MB/s
Feb 13 15:32:37.277097 kernel: raid6: .... xor() 8060 MB/s, rmw enabled
Feb 13 15:32:37.277111 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:32:37.297003 kernel: xor: automatically using best checksumming function avx
Feb 13 15:32:37.454013 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:32:37.468575 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:32:37.478172 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:32:37.490465 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Feb 13 15:32:37.495358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:32:37.507149 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:32:37.523498 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Feb 13 15:32:37.553742 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:32:37.563147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:32:37.625545 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:32:37.637145 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:32:37.648936 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:32:37.651781 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:32:37.654581 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:32:37.655801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:32:37.661059 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 15:32:37.697033 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:32:37.697214 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:32:37.697229 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:32:37.697243 kernel: GPT:9289727 != 19775487
Feb 13 15:32:37.697257 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:32:37.697271 kernel: GPT:9289727 != 19775487
Feb 13 15:32:37.697290 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:32:37.697304 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:32:37.669183 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:32:37.686944 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:32:37.699756 kernel: libata version 3.00 loaded.
Feb 13 15:32:37.702569 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:32:37.702602 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:32:37.703779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:32:37.704855 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:37.709416 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 15:32:37.742593 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 15:32:37.742612 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 15:32:37.742769 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 15:32:37.742905 kernel: scsi host0: ahci
Feb 13 15:32:37.743583 kernel: scsi host1: ahci
Feb 13 15:32:37.743741 kernel: scsi host2: ahci
Feb 13 15:32:37.743889 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467)
Feb 13 15:32:37.743901 kernel: scsi host3: ahci
Feb 13 15:32:37.744054 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (465)
Feb 13 15:32:37.744065 kernel: scsi host4: ahci
Feb 13 15:32:37.744237 kernel: scsi host5: ahci
Feb 13 15:32:37.744376 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 15:32:37.744388 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 15:32:37.744402 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 15:32:37.744413 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 15:32:37.744423 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 15:32:37.744433 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 15:32:37.707295 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:37.712015 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:32:37.712555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:37.714280 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:37.724349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:37.757391 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:32:37.781489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:37.788379 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:32:37.793348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:32:37.798304 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:32:37.806803 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:32:37.817114 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:32:37.820023 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:32:37.841881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:37.883100 disk-uuid[553]: Primary Header is updated.
Feb 13 15:32:37.883100 disk-uuid[553]: Secondary Entries is updated.
Feb 13 15:32:37.883100 disk-uuid[553]: Secondary Header is updated.
Feb 13 15:32:37.886997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:32:37.891996 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:32:38.053000 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 15:32:38.053090 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 15:32:38.053107 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 15:32:38.053119 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 15:32:38.053998 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 15:32:38.055000 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 15:32:38.056007 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 15:32:38.056019 kernel: ata3.00: applying bridge limits
Feb 13 15:32:38.057003 kernel: ata3.00: configured for UDMA/100
Feb 13 15:32:38.058998 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:32:38.106005 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 15:32:38.118536 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:32:38.118554 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:32:38.893031 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:32:38.893338 disk-uuid[564]: The operation has completed successfully.
Feb 13 15:32:38.924179 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:32:38.924343 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:32:38.956138 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:32:38.960171 sh[591]: Success
Feb 13 15:32:38.973009 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 15:32:39.006478 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:32:39.022566 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:32:39.025641 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:32:39.041709 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:32:39.041779 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:32:39.041795 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:32:39.042720 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:32:39.043576 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:32:39.049166 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:32:39.051785 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:32:39.063154 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:32:39.064835 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:32:39.074591 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:39.074629 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:32:39.074641 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:32:39.077039 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:32:39.086110 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:32:39.088182 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:39.097141 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:32:39.106139 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:32:39.164683 ignition[685]: Ignition 2.20.0
Feb 13 15:32:39.164694 ignition[685]: Stage: fetch-offline
Feb 13 15:32:39.164741 ignition[685]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:39.164754 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:32:39.164868 ignition[685]: parsed url from cmdline: ""
Feb 13 15:32:39.164873 ignition[685]: no config URL provided
Feb 13 15:32:39.164878 ignition[685]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:32:39.164888 ignition[685]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:32:39.164919 ignition[685]: op(1): [started] loading QEMU firmware config module
Feb 13 15:32:39.164925 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:32:39.172941 ignition[685]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:32:39.183339 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:32:39.192139 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:32:39.214333 systemd-networkd[779]: lo: Link UP
Feb 13 15:32:39.214342 systemd-networkd[779]: lo: Gained carrier
Feb 13 15:32:39.215866 systemd-networkd[779]: Enumeration completed
Feb 13 15:32:39.216334 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:39.216338 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:32:39.216565 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:32:39.217676 systemd-networkd[779]: eth0: Link UP
Feb 13 15:32:39.217680 systemd-networkd[779]: eth0: Gained carrier
Feb 13 15:32:39.217689 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:39.218819 systemd[1]: Reached target network.target - Network.
Feb 13 15:32:39.231773 ignition[685]: parsing config with SHA512: b5a576ccdb271a90f74ded78b3b31a20f61c7be51224937af1c0498e6ce857c4c287904f7c865ad0d2171186371426dc49ffeae931be97c96b3e67f970aca755
Feb 13 15:32:39.238433 unknown[685]: fetched base config from "system"
Feb 13 15:32:39.238445 unknown[685]: fetched user config from "qemu"
Feb 13 15:32:39.240030 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:32:39.241570 ignition[685]: fetch-offline: fetch-offline passed
Feb 13 15:32:39.242470 ignition[685]: Ignition finished successfully
Feb 13 15:32:39.245381 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:32:39.260890 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:32:39.278121 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:32:39.292128 ignition[783]: Ignition 2.20.0
Feb 13 15:32:39.292141 ignition[783]: Stage: kargs
Feb 13 15:32:39.292299 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:39.292311 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:32:39.293210 ignition[783]: kargs: kargs passed
Feb 13 15:32:39.293260 ignition[783]: Ignition finished successfully
Feb 13 15:32:39.299684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:32:39.312224 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:32:39.457455 ignition[793]: Ignition 2.20.0
Feb 13 15:32:39.457467 ignition[793]: Stage: disks
Feb 13 15:32:39.457624 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:39.457636 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:32:39.458412 ignition[793]: disks: disks passed
Feb 13 15:32:39.458455 ignition[793]: Ignition finished successfully
Feb 13 15:32:39.461025 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:32:39.461702 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:32:39.463707 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:32:39.464206 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:32:39.464538 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:32:39.464890 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:32:39.484234 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:32:39.496608 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:32:39.504266 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:32:39.516135 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:32:39.606020 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:32:39.606927 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:32:39.608529 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:32:39.624323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:32:39.627062 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:32:39.628087 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:32:39.628143 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:32:39.642132 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811)
Feb 13 15:32:39.642160 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:39.642172 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:32:39.642182 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:32:39.628174 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:32:39.636178 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:32:39.647142 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:32:39.643449 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:32:39.649430 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:32:39.705062 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:32:39.711081 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:32:39.716029 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:32:39.721341 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:32:39.813187 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:32:39.830176 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:32:39.833297 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:32:39.838999 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:39.861292 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:32:39.877110 ignition[925]: INFO : Ignition 2.20.0
Feb 13 15:32:39.877110 ignition[925]: INFO : Stage: mount
Feb 13 15:32:39.879178 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:39.879178 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:32:39.879178 ignition[925]: INFO : mount: mount passed
Feb 13 15:32:39.879178 ignition[925]: INFO : Ignition finished successfully
Feb 13 15:32:39.880948 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:32:39.895080 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:32:40.041053 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:32:40.053274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:32:40.060521 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Feb 13 15:32:40.060565 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:32:40.060580 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:32:40.061994 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:32:40.065005 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:32:40.066185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:32:40.146799 ignition[956]: INFO : Ignition 2.20.0
Feb 13 15:32:40.146799 ignition[956]: INFO : Stage: files
Feb 13 15:32:40.148589 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:40.148589 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:32:40.151142 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:32:40.152799 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:32:40.152799 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:32:40.155683 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:32:40.155683 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:32:40.158932 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:32:40.158932 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:32:40.158932 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:32:40.156645 unknown[956]: wrote ssh authorized keys file for user: core
Feb 13 15:32:40.199449 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:32:40.305894 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:32:40.305894 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:32:40.310185 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 15:32:40.673045 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:32:40.763876 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:32:40.763876 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:32:40.768062 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:32:40.869171 systemd-networkd[779]: eth0: Gained IPv6LL
Feb 13 15:32:41.050718 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:32:41.577508 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:32:41.577508 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:32:41.581834 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:32:41.584382 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:32:41.584382 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:32:41.584382 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:32:41.589444 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:32:41.591624 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:32:41.591624 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:32:41.591624 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:32:41.620431 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:32:41.625663 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:32:41.627649 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:32:41.627649 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:32:41.630942 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:32:41.632528 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:32:41.634664 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:32:41.636659 ignition[956]: INFO : files: files passed
Feb 13 15:32:41.636659 ignition[956]: INFO : Ignition finished successfully
Feb 13 15:32:41.638619 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:32:41.651248 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:32:41.654736 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:32:41.655354 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:32:41.655466 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:32:41.682938 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:32:41.687510 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:32:41.687510 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:32:41.691249 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:32:41.694670 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:32:41.696153 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:32:41.707104 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:32:41.730680 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:32:41.730822 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:32:41.733074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:32:41.733448 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:32:41.733817 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:32:41.739432 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:32:41.759662 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:32:41.769158 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:32:41.777664 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:32:41.778957 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:32:41.781263 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:32:41.783313 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:32:41.783423 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:32:41.785764 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:32:41.787504 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:32:41.789551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:32:41.791690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:32:41.793656 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:32:41.795799 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
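The files-stage operations above (ensureUsers, createFiles, unit processing, presets) replay a machine-provided Ignition config. That config itself is not shown in the log; the following hypothetical fragment, built as a Python dict and serialized to JSON, only illustrates the shape of an Ignition v3-style document that would yield a few of the logged operations, reusing paths and URLs from the entries above (the SSH key and unit contents are placeholders):

    import json

    # Hypothetical Ignition-style config; NOT the actual config this VM booted with.
    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            # corresponds to: ensureUsers op(1)/op(2) for user "core"
            "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 <placeholder-key>"]}],
        },
        "storage": {
            # corresponds to: createFiles op(3) fetching the helm tarball
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }],
            # corresponds to: op(a) writing the kubernetes.raw sysext link
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            }],
        },
        "systemd": {
            # corresponds to: op(c)/op(12) and op(e)/op(10) unit handling
            "units": [
                {"name": "prepare-helm.service", "enabled": True, "contents": "<unit text placeholder>"},
                {"name": "coreos-metadata.service", "enabled": False},
            ],
        },
    }

    print(json.dumps(config, indent=2))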
Feb 13 15:32:41.797908 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:32:41.800243 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:32:41.802314 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:32:41.804663 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:32:41.806476 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:32:41.806629 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:32:41.808712 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:32:41.810298 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:32:41.812403 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:32:41.812515 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:32:41.814595 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:32:41.814704 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:32:41.816920 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:32:41.817037 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:32:41.819085 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:32:41.820872 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:32:41.825036 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:32:41.826522 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:32:41.828499 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:32:41.830340 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:32:41.830432 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:32:41.832374 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:32:41.832461 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:32:41.834860 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:32:41.834971 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:32:41.836909 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:32:41.837024 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:32:41.851154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:32:41.853088 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:32:41.853948 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:32:41.854085 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:32:41.856244 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:32:41.856427 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:32:41.861637 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:32:41.861752 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:32:41.865631 ignition[1011]: INFO : Ignition 2.20.0
Feb 13 15:32:41.865631 ignition[1011]: INFO : Stage: umount
Feb 13 15:32:41.865631 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:41.865631 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:32:41.870526 ignition[1011]: INFO : umount: umount passed
Feb 13 15:32:41.870526 ignition[1011]: INFO : Ignition finished successfully
Feb 13 15:32:41.868198 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:32:41.868307 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:32:41.871054 systemd[1]: Stopped target network.target - Network.
Feb 13 15:32:41.872270 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:32:41.872329 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:32:41.874257 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:32:41.874310 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:32:41.876315 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:32:41.876362 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:32:41.878432 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:32:41.878481 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:32:41.880621 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:32:41.883006 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:32:41.886316 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:32:41.892204 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:32:41.892359 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:32:41.894047 systemd-networkd[779]: eth0: DHCPv6 lease lost
Feb 13 15:32:41.896108 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:32:41.896173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:32:41.898772 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:32:41.898904 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:32:41.901637 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:32:41.901705 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:32:41.909074 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:32:41.910507 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:32:41.910563 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:32:41.912944 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:32:41.913002 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:32:41.916510 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:32:41.916556 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:32:41.922401 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:32:41.933049 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:32:41.934143 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:32:41.952653 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:32:41.953690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:32:41.956355 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:32:41.957384 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:32:41.959435 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:32:41.960381 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:32:41.962425 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:32:41.963332 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:32:41.965419 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:32:41.966317 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:32:41.968317 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:32:41.969342 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:32:41.981095 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:32:41.983334 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:32:41.983388 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:32:41.985648 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:32:41.986918 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:32:41.990598 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:32:41.991531 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:32:41.993867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:32:41.994857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:41.997294 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:32:41.998365 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:32:42.342303 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:32:42.343324 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:32:42.345317 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:32:42.347293 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:32:42.348274 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:32:42.360123 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:32:42.368703 systemd[1]: Switching root.
Feb 13 15:32:42.396408 systemd-journald[194]: Journal stopped
Feb 13 15:32:43.882318 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:32:43.882397 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:32:43.882419 kernel: SELinux: policy capability open_perms=1
Feb 13 15:32:43.882433 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:32:43.882452 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:32:43.882467 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:32:43.882482 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:32:43.882496 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:32:43.882511 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:32:43.882538 kernel: audit: type=1403 audit(1739460763.145:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:32:43.882557 systemd[1]: Successfully loaded SELinux policy in 82.883ms.
Feb 13 15:32:43.882584 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.672ms.
Feb 13 15:32:43.882601 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:32:43.882617 systemd[1]: Detected virtualization kvm.
Feb 13 15:32:43.882633 systemd[1]: Detected architecture x86-64.
Feb 13 15:32:43.882649 systemd[1]: Detected first boot.
Feb 13 15:32:43.882665 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:32:43.882680 zram_generator::config[1054]: No configuration found.
Feb 13 15:32:43.882701 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:32:43.882718 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:32:43.882740 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:32:43.882756 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:32:43.882772 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:32:43.882788 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:32:43.882803 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:32:43.882819 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:32:43.882838 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:32:43.882854 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:32:43.882869 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:32:43.882885 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:32:43.882901 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:32:43.882919 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:32:43.882935 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:32:43.882951 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:32:43.882971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
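The 'systemd 255 running in system mode (+PAM +AUDIT ...)' banner above encodes compile-time options as +NAME/-NAME tokens. A small sketch that splits such a banner into enabled and disabled feature sets:

    def parse_features(banner: str):
        """Split a systemd feature string like '+PAM -APPARMOR ...' into
        (enabled, disabled) sets; 'key=value' tokens such as
        default-hierarchy=unified are skipped."""
        enabled, disabled = set(), set()
        for tok in banner.split():
            if tok.startswith("+"):
                enabled.add(tok[1:])
            elif tok.startswith("-") and "=" not in tok:
                disabled.add(tok[1:])
        return enabled, disabled

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified")
    enabled, disabled = parse_features(features)
    assert "SELINUX" in enabled and "APPARMOR" in disabled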
Feb 13 15:32:43.883003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:32:43.883020 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:32:43.883035 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:32:43.883051 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:32:43.883068 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:32:43.883091 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:32:43.883107 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:32:43.883126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:32:43.883143 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:32:43.883158 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:32:43.883174 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:32:43.883190 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:32:43.883206 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:32:43.883222 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:32:43.883238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:32:43.883255 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:32:43.883270 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:32:43.883292 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:32:43.883308 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:32:43.883324 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:32:43.883341 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:43.883356 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:32:43.883372 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:32:43.883387 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:32:43.883403 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:32:43.883422 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:32:43.883438 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:32:43.883454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:32:43.883469 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:32:43.883485 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:32:43.883501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:32:43.883517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:32:43.883544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:32:43.883564 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:32:43.883586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:32:43.883757 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:32:43.883783 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:32:43.883800 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:32:43.883817 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:32:43.883833 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:32:43.883849 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:32:43.883865 kernel: loop: module loaded
Feb 13 15:32:43.883885 kernel: fuse: init (API version 7.39)
Feb 13 15:32:43.883923 systemd-journald[1124]: Collecting audit messages is disabled.
Feb 13 15:32:43.883952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:32:43.883968 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:32:43.886013 systemd-journald[1124]: Journal started
Feb 13 15:32:43.886040 systemd-journald[1124]: Runtime Journal (/run/log/journal/097729b805ee4ff9801cd642ed566086) is 6.0M, max 48.4M, 42.3M free.
Feb 13 15:32:43.673098 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:32:43.692088 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:32:43.692510 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:32:43.891091 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:32:43.898839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:32:43.898889 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:32:43.900551 systemd[1]: Stopped verity-setup.service.
Feb 13 15:32:43.904024 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:43.909193 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:32:43.910198 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:32:43.911672 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:32:43.913020 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:32:43.914550 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:32:43.916237 kernel: ACPI: bus type drm_connector registered
Feb 13 15:32:43.916717 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:32:43.918353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:32:43.919842 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:32:43.921766 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:32:43.923673 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:32:43.923881 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:32:43.925719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:32:43.925929 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:32:43.927844 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:32:43.928066 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:32:43.929905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:32:43.930129 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:32:43.932017 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:32:43.932225 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:32:43.934033 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:32:43.934239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:32:43.936006 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:32:43.937458 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:32:43.939045 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:32:43.951824 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:32:43.963082 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:32:43.965289 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:32:43.966394 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:32:43.966419 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:32:43.968330 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:32:43.970509 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:32:43.976183 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:32:43.977797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:32:43.980535 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:32:43.983382 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:32:43.984745 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:32:43.989145 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:32:43.990721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:32:43.996231 systemd-journald[1124]: Time spent on flushing to /var/log/journal/097729b805ee4ff9801cd642ed566086 is 17.675ms for 953 entries.
Feb 13 15:32:43.996231 systemd-journald[1124]: System Journal (/var/log/journal/097729b805ee4ff9801cd642ed566086) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:32:44.026279 systemd-journald[1124]: Received client request to flush runtime journal.
Feb 13 15:32:43.995100 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:32:43.997606 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:32:44.002241 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:32:44.007532 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:32:44.009046 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:32:44.012765 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:32:44.015534 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:32:44.029836 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:32:44.029994 kernel: loop0: detected capacity change from 0 to 140992
Feb 13 15:32:44.031890 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:32:44.033970 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:32:44.038493 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:32:44.049196 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:32:44.053358 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:32:44.054474 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:32:44.058094 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Feb 13 15:32:44.058108 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Feb 13 15:32:44.065615 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:32:44.085212 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:32:44.087246 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:32:44.088045 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:32:44.090759 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:32:44.092053 kernel: loop1: detected capacity change from 0 to 138184
Feb 13 15:32:44.120200 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:32:44.124570 kernel: loop2: detected capacity change from 0 to 211296
Feb 13 15:32:44.131663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:32:44.150921 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Feb 13 15:32:44.150942 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Feb 13 15:32:44.153001 kernel: loop3: detected capacity change from 0 to 140992
Feb 13 15:32:44.156959 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:32:44.165997 kernel: loop4: detected capacity change from 0 to 138184
Feb 13 15:32:44.179036 kernel: loop5: detected capacity change from 0 to 211296
Feb 13 15:32:44.186491 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:32:44.187072 (sd-merge)[1195]: Merged extensions into '/usr'.
Feb 13 15:32:44.190456 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:32:44.190559 systemd[1]: Reloading...
Feb 13 15:32:44.241836 zram_generator::config[1219]: No configuration found.
Feb 13 15:32:44.307108 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
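The loop0-loop5 capacity changes above correspond to the three sysext images named by (sd-merge) ('containerd-flatcar', 'docker-flatcar', 'kubernetes'), each attached twice (loop3-5 repeat the loop0-2 sizes). Assuming the kernel's usual 512-byte sector unit for these messages, the image sizes can be recovered like so:

    # Capacity changes from the kernel lines above, in 512-byte sectors
    # (an assumption about the reporting unit, not stated in the log itself).
    SECTOR = 512
    for name, sectors in {"loop0": 140992, "loop1": 138184, "loop2": 211296}.items():
        print(f"{name}: {sectors * SECTOR / 2**20:.1f} MiB")
    # loop0: 68.8 MiB, loop1: 67.5 MiB, loop2: 103.2 MiB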
Feb 13 15:32:44.361766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:32:44.410420 systemd[1]: Reloading finished in 219 ms.
Feb 13 15:32:44.445223 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:32:44.446791 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:32:44.460130 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:32:44.462175 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:32:44.468612 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:32:44.468695 systemd[1]: Reloading...
Feb 13 15:32:44.483167 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:32:44.483564 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:32:44.484548 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:32:44.484848 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Feb 13 15:32:44.484935 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Feb 13 15:32:44.489137 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:32:44.489151 systemd-tmpfiles[1260]: Skipping /boot
Feb 13 15:32:44.503190 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:32:44.503315 systemd-tmpfiles[1260]: Skipping /boot
Feb 13 15:32:44.519006 zram_generator::config[1287]: No configuration found.
Feb 13 15:32:44.623761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:32:44.672629 systemd[1]: Reloading finished in 203 ms.
Feb 13 15:32:44.690170 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:32:44.702462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:32:44.711755 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:32:44.714313 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:32:44.716869 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:32:44.721105 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:32:44.727963 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:32:44.733212 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:32:44.736730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:44.736899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:32:44.740605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:32:44.744876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
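The 'Duplicate line for path ..., ignoring' warnings above come from systemd-tmpfiles seeing the same path claimed by more than one tmpfiles.d line. A rough sketch of that first-claim-wins behavior (a simplification of ours; real systemd-tmpfiles also applies /etc over /run over /usr/lib precedence and per-basename overrides):

    from pathlib import Path

    def scan_tmpfiles(dirs=("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d")):
        """Collect tmpfiles.d lines, warning on duplicate paths like the log above."""
        claimed = {}  # path -> (fragment file, line number) that first claimed it
        for d in dirs:
            if not Path(d).is_dir():
                continue
            for frag in sorted(Path(d).glob("*.conf")):
                for lineno, line in enumerate(frag.read_text().splitlines(), 1):
                    fields = line.split()
                    if len(fields) < 2 or line.lstrip().startswith("#"):
                        continue  # skip blank lines and comments
                    path = fields[1]  # second field of a tmpfiles line is the path
                    if path in claimed:
                        print(f'{frag}:{lineno}: Duplicate line for path "{path}", ignoring.')
                    else:
                        claimed[path] = (frag, lineno)
        return claimed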
Feb 13 15:32:44.750937 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:32:44.752229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:32:44.754433 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:32:44.755932 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:44.756965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:32:44.758045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:32:44.759201 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Feb 13 15:32:44.759836 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:32:44.761892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:32:44.762071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:32:44.764282 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:32:44.764480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:32:44.774913 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:32:44.776259 augenrules[1359]: No rules
Feb 13 15:32:44.777634 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:32:44.777860 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:32:44.782268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:32:44.787422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:44.794280 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:32:44.796686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:32:44.799330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:32:44.803055 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:32:44.805782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:32:44.809200 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:32:44.809620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:32:44.814370 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:32:44.816824 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:32:44.818061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:32:44.819115 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:32:44.821176 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:32:44.821348 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:32:44.823927 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:32:44.824112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:32:44.826743 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:32:44.832091 augenrules[1376]: /sbin/augenrules: No change
Feb 13 15:32:44.839078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:32:44.839196 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:32:44.839671 augenrules[1416]: No rules
Feb 13 15:32:44.841797 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:32:44.842852 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:32:44.844840 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:32:44.846402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:32:44.847238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:32:44.849468 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:32:44.849667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:32:44.854758 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:32:44.863204 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:32:44.864993 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:32:44.873029 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:32:44.894249 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1368)
Feb 13 15:32:44.904190 systemd-networkd[1400]: lo: Link UP
Feb 13 15:32:44.904523 systemd-networkd[1400]: lo: Gained carrier
Feb 13 15:32:44.906945 systemd-networkd[1400]: Enumeration completed
Feb 13 15:32:44.907138 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:32:44.908350 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:44.908359 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:32:44.909094 systemd-networkd[1400]: eth0: Link UP
Feb 13 15:32:44.909103 systemd-networkd[1400]: eth0: Gained carrier
Feb 13 15:32:44.909114 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:44.921145 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:32:44.927226 systemd-resolved[1329]: Positive Trust Anchors:
Feb 13 15:32:44.927242 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:32:44.927273 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:32:44.927627 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:32:44.935245 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Feb 13 15:32:44.938969 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:32:44.941433 systemd[1]: Reached target network.target - Network.
Feb 13 15:32:44.942490 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:32:44.952512 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:32:44.951894 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:32:44.953274 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:32:46.284537 systemd-timesyncd[1425]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:32:46.284592 systemd-timesyncd[1425]: Initial clock synchronization to Thu 2025-02-13 15:32:46.284449 UTC.
Feb 13 15:32:46.286104 systemd-resolved[1329]: Clock change detected. Flushing caches.
Feb 13 15:32:46.286141 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:46.287374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:32:46.293138 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:32:46.301286 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:32:46.311108 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 15:32:46.312272 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 15:32:46.313383 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 15:32:46.324170 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 15:32:46.329418 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:32:46.386659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:32:46.403116 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:32:46.416258 kernel: kvm_amd: TSC scaling supported
Feb 13 15:32:46.416289 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 15:32:46.416302 kernel: kvm_amd: Nested Paging enabled
Feb 13 15:32:46.416314 kernel: kvm_amd: LBR virtualization supported
Feb 13 15:32:46.417433 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 15:32:46.417460 kernel: kvm_amd: Virtual GIF supported
Feb 13 15:32:46.439103 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:32:46.481499 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
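Note the timestamp jump above: after systemd-timesyncd's first sample from 10.0.0.1, the journal skips from 15:32:44.95 straight to 15:32:46.28, and systemd-resolved reports the clock change and flushes its caches. The size of the step can be read off the surrounding entries (an upper bound, since a little real time also passed between them):

    from datetime import datetime

    before = datetime.fromisoformat("2025-02-13 15:32:44.953274")  # last pre-sync entry
    after = datetime.fromisoformat("2025-02-13 15:32:46.284537")   # first post-sync entry
    print(f"clock stepped forward by ~{(after - before).total_seconds():.2f} s")
    # ~1.33 s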
Feb 13 15:32:46.483167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:32:46.495260 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:32:46.504834 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:32:46.536488 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:32:46.538103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:32:46.539239 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:32:46.540396 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:32:46.541653 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:32:46.543110 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:32:46.544300 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:32:46.545537 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:32:46.546776 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:32:46.546800 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:32:46.547700 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:32:46.549373 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:32:46.552282 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:32:46.558623 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:32:46.560984 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:32:46.562549 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:32:46.563711 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:32:46.564709 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:32:46.565671 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:32:46.565697 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:32:46.566663 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:32:46.568698 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:32:46.572083 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:32:46.572399 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:32:46.577293 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:32:46.578438 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:32:46.579928 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:32:46.582673 jq[1459]: false
Feb 13 15:32:46.585626 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:32:46.589721 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:32:46.593733 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:32:46.601487 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:32:46.603050 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:32:46.603484 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:32:46.604964 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:32:46.608002 extend-filesystems[1460]: Found loop3
Feb 13 15:32:46.608951 extend-filesystems[1460]: Found loop4
Feb 13 15:32:46.608951 extend-filesystems[1460]: Found loop5
Feb 13 15:32:46.608951 extend-filesystems[1460]: Found sr0
Feb 13 15:32:46.608951 extend-filesystems[1460]: Found vda
Feb 13 15:32:46.608951 extend-filesystems[1460]: Found vda1
Feb 13 15:32:46.611834 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:32:46.612643 extend-filesystems[1460]: Found vda2
Feb 13 15:32:46.612643 extend-filesystems[1460]: Found vda3
Feb 13 15:32:46.612643 extend-filesystems[1460]: Found usr
Feb 13 15:32:46.612643 extend-filesystems[1460]: Found vda4
Feb 13 15:32:46.612643 extend-filesystems[1460]: Found vda6
Feb 13 15:32:46.612643 extend-filesystems[1460]: Found vda7
Feb 13 15:32:46.612643 extend-filesystems[1460]: Found vda9
Feb 13 15:32:46.612643 extend-filesystems[1460]: Checking size of /dev/vda9
Feb 13 15:32:46.615999 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:32:46.622671 dbus-daemon[1458]: [system] SELinux support is enabled
Feb 13 15:32:46.626702 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:32:46.631160 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:32:46.631378 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:32:46.631712 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:32:46.631922 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:32:46.634366 jq[1475]: true
Feb 13 15:32:46.637155 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:32:46.637371 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:32:46.637510 extend-filesystems[1460]: Resized partition /dev/vda9
Feb 13 15:32:46.640136 update_engine[1473]: I20250213 15:32:46.638705 1473 main.cc:92] Flatcar Update Engine starting
Feb 13 15:32:46.646124 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:32:46.647586 update_engine[1473]: I20250213 15:32:46.646207 1473 update_check_scheduler.cc:74] Next update check in 5m30s
Feb 13 15:32:46.651923 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:32:46.655860 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:32:46.655940 jq[1484]: true
Feb 13 15:32:46.653339 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
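extend-filesystems grows the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks; with the 4k block size reported by resize2fs just below, that is roughly a 2.1 GiB to 7.1 GiB resize. The arithmetic:

    BLOCK = 4096  # "(4k) blocks" per the resize2fs output in the surrounding entries
    old, new = 553472, 1864699
    print(f"{old * BLOCK / 2**30:.2f} GiB -> {new * BLOCK / 2**30:.2f} GiB")
    # 2.11 GiB -> 7.11 GiB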
Feb 13 15:32:46.653368 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:32:46.655464 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:32:46.655481 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:32:46.659239 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:32:46.668116 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1384) Feb 13 15:32:46.668163 tar[1482]: linux-amd64/helm Feb 13 15:32:46.665937 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:32:46.688109 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:32:46.719746 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:32:46.719746 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:32:46.719746 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:32:46.719706 systemd-logind[1469]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:32:46.736518 extend-filesystems[1460]: Resized filesystem in /dev/vda9 Feb 13 15:32:46.719727 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:32:46.721743 systemd-logind[1469]: New seat seat0. Feb 13 15:32:46.724438 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:32:46.724676 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:32:46.730060 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:32:46.742267 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:32:46.744258 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:32:46.746496 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:32:46.747612 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:32:46.832823 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:32:46.849040 containerd[1485]: time="2025-02-13T15:32:46.848959251Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:32:46.856386 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:32:46.864810 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:32:46.871851 containerd[1485]: time="2025-02-13T15:32:46.871780769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:46.873584 containerd[1485]: time="2025-02-13T15:32:46.873523628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:46.873584 containerd[1485]: time="2025-02-13T15:32:46.873560838Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 15:32:46.873651 containerd[1485]: time="2025-02-13T15:32:46.873589291Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:32:46.873790 containerd[1485]: time="2025-02-13T15:32:46.873764760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:32:46.873790 containerd[1485]: time="2025-02-13T15:32:46.873785779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:46.873877 containerd[1485]: time="2025-02-13T15:32:46.873852555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:46.873877 containerd[1485]: time="2025-02-13T15:32:46.873869627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:46.874091 containerd[1485]: time="2025-02-13T15:32:46.874050255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:46.874091 containerd[1485]: time="2025-02-13T15:32:46.874082466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:46.874131 containerd[1485]: time="2025-02-13T15:32:46.874097254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:46.874131 containerd[1485]: time="2025-02-13T15:32:46.874108024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:46.874215 containerd[1485]: time="2025-02-13T15:32:46.874197772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:46.874547 containerd[1485]: time="2025-02-13T15:32:46.874517341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:46.874670 containerd[1485]: time="2025-02-13T15:32:46.874650481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:46.874670 containerd[1485]: time="2025-02-13T15:32:46.874667503Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:32:46.874779 containerd[1485]: time="2025-02-13T15:32:46.874762762Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:32:46.874833 containerd[1485]: time="2025-02-13T15:32:46.874819638Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:32:46.881261 containerd[1485]: time="2025-02-13T15:32:46.881144700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:32:46.881261 containerd[1485]: time="2025-02-13T15:32:46.881214311Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 15:32:46.881261 containerd[1485]: time="2025-02-13T15:32:46.881232795Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:32:46.881261 containerd[1485]: time="2025-02-13T15:32:46.881249897Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:32:46.881261 containerd[1485]: time="2025-02-13T15:32:46.881274614Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881447197Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881731300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881850955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881867285Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881880911Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881894576Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881908182Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881920234Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881933579Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881947796Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881960249Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881971751Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:32:46.881967 containerd[1485]: time="2025-02-13T15:32:46.881983944Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882003961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882017066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882035651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882060658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882090684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882103739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882114669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882128755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882142511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882165795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882176976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882188558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882199789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882213334Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:32:46.882282 containerd[1485]: time="2025-02-13T15:32:46.882233282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882251025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882262296Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882317499Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882356803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882366842Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882379636Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882388503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882404232Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882414491Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:32:46.882601 containerd[1485]: time="2025-02-13T15:32:46.882425031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:32:46.882787 containerd[1485]: time="2025-02-13T15:32:46.882683736Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:32:46.882787 containerd[1485]: time="2025-02-13T15:32:46.882725535Z" level=info msg="Connect containerd service" Feb 13 15:32:46.882787 containerd[1485]: time="2025-02-13T15:32:46.882764608Z" level=info msg="using legacy CRI server" Feb 13 15:32:46.882787 containerd[1485]: time="2025-02-13T15:32:46.882771280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:32:46.882963 containerd[1485]: 
time="2025-02-13T15:32:46.882893069Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:32:46.883018 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:32:46.883290 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:32:46.883496 containerd[1485]: time="2025-02-13T15:32:46.883466043Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:32:46.883792 containerd[1485]: time="2025-02-13T15:32:46.883640410Z" level=info msg="Start subscribing containerd event" Feb 13 15:32:46.883792 containerd[1485]: time="2025-02-13T15:32:46.883721813Z" level=info msg="Start recovering state" Feb 13 15:32:46.883849 containerd[1485]: time="2025-02-13T15:32:46.883819857Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:32:46.884052 containerd[1485]: time="2025-02-13T15:32:46.884036423Z" level=info msg="Start event monitor" Feb 13 15:32:46.884743 containerd[1485]: time="2025-02-13T15:32:46.884206011Z" level=info msg="Start snapshots syncer" Feb 13 15:32:46.884743 containerd[1485]: time="2025-02-13T15:32:46.884221761Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:32:46.884743 containerd[1485]: time="2025-02-13T15:32:46.884232260Z" level=info msg="Start streaming server" Feb 13 15:32:46.884743 containerd[1485]: time="2025-02-13T15:32:46.884179772Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:32:46.884743 containerd[1485]: time="2025-02-13T15:32:46.884396759Z" level=info msg="containerd successfully booted in 0.036507s" Feb 13 15:32:46.884858 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:32:46.909322 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:32:46.915896 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:32:46.918602 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:34342.service - OpenSSH per-connection server daemon (10.0.0.1:34342). Feb 13 15:32:46.922643 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:32:46.939490 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:32:46.942121 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:32:46.943552 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:32:46.996672 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 34342 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:46.999184 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:47.008640 systemd-logind[1469]: New session 1 of user core. Feb 13 15:32:47.009908 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:32:47.021268 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:32:47.035406 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:32:47.038694 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 15:32:47.047852 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:32:47.080912 tar[1482]: linux-amd64/LICENSE Feb 13 15:32:47.080991 tar[1482]: linux-amd64/README.md Feb 13 15:32:47.093354 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:32:47.165227 systemd[1551]: Queued start job for default target default.target. Feb 13 15:32:47.174460 systemd[1551]: Created slice app.slice - User Application Slice. Feb 13 15:32:47.174486 systemd[1551]: Reached target paths.target - Paths. Feb 13 15:32:47.174499 systemd[1551]: Reached target timers.target - Timers. Feb 13 15:32:47.176103 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:32:47.187210 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:32:47.187368 systemd[1551]: Reached target sockets.target - Sockets. Feb 13 15:32:47.187389 systemd[1551]: Reached target basic.target - Basic System. Feb 13 15:32:47.187435 systemd[1551]: Reached target default.target - Main User Target. Feb 13 15:32:47.187475 systemd[1551]: Startup finished in 132ms. Feb 13 15:32:47.187737 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:32:47.190440 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:32:47.251308 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:43494.service - OpenSSH per-connection server daemon (10.0.0.1:43494). Feb 13 15:32:47.301015 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 43494 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:47.302364 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:47.305951 systemd-logind[1469]: New session 2 of user core. Feb 13 15:32:47.322182 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:32:47.376568 sshd[1567]: Connection closed by 10.0.0.1 port 43494 Feb 13 15:32:47.376956 sshd-session[1565]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:47.387424 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:43494.service: Deactivated successfully. Feb 13 15:32:47.388937 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:32:47.390123 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:32:47.391266 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:43508.service - OpenSSH per-connection server daemon (10.0.0.1:43508). Feb 13 15:32:47.393754 systemd-logind[1469]: Removed session 2. Feb 13 15:32:47.434355 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 43508 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:47.435668 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:47.439117 systemd-logind[1469]: New session 3 of user core. Feb 13 15:32:47.448162 systemd-networkd[1400]: eth0: Gained IPv6LL Feb 13 15:32:47.459210 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:32:47.470344 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:32:47.473330 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:32:47.490307 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:32:47.492666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:47.495179 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
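The ordering above is the standard systemd network-readiness handshake: eth0 gains its address (here, IPv6LL), systemd-networkd-wait-online completes, network-online.target is reached, and only then do network-dependent units such as coreos-metadata and kubelet start. A sketch of how a unit opts into that ordering, using a hypothetical service name:

    # /etc/systemd/system/my-agent.service (hypothetical)
    [Unit]
    # Wants= pulls network-online.target into the transaction;
    # After= delays startup until the target is actually reached
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/my-agent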
Feb 13 15:32:47.514059 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:32:47.514386 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:32:47.516501 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:32:47.517124 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:32:47.553921 sshd[1580]: Connection closed by 10.0.0.1 port 43508 Feb 13 15:32:47.554249 sshd-session[1572]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:47.558418 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:43508.service: Deactivated successfully. Feb 13 15:32:47.560243 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:32:47.560833 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:32:47.561772 systemd-logind[1469]: Removed session 3. Feb 13 15:32:48.458881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:48.460908 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:32:48.466106 systemd[1]: Startup finished in 709ms (kernel) + 6.423s (initrd) + 4.055s (userspace) = 11.187s. Feb 13 15:32:48.476045 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:32:49.068093 kubelet[1600]: E0213 15:32:49.067908 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:32:49.072693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:32:49.072917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:32:49.073336 systemd[1]: kubelet.service: Consumed 1.442s CPU time. Feb 13 15:32:57.566231 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:57128.service - OpenSSH per-connection server daemon (10.0.0.1:57128). Feb 13 15:32:57.611714 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 57128 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:57.613629 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:57.618057 systemd-logind[1469]: New session 4 of user core. Feb 13 15:32:57.627191 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:32:57.683578 sshd[1616]: Connection closed by 10.0.0.1 port 57128 Feb 13 15:32:57.684041 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:57.703355 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:57128.service: Deactivated successfully. Feb 13 15:32:57.705451 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:32:57.707314 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:32:57.708533 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:57134.service - OpenSSH per-connection server daemon (10.0.0.1:57134). Feb 13 15:32:57.709263 systemd-logind[1469]: Removed session 4. 
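The kubelet crash above is expected at this point in provisioning: the service is enabled before the node has joined a cluster, so /var/lib/kubelet/config.yaml does not exist yet and the process exits with status 1; systemd will keep restarting it until the file appears. A minimal sketch of what the loader expects, assuming kubeadm-style provisioning (kubeadm init/join writes the real file, so this hypothetical example is illustrative only):

    # /var/lib/kubelet/config.yaml (hypothetical minimal example)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches the SystemdCgroup:true runc option visible in the
    # containerd CRI config dump earlier in this log
    cgroupDriver: systemd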
Feb 13 15:32:57.752692 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 57134 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:57.754484 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:57.758409 systemd-logind[1469]: New session 5 of user core. Feb 13 15:32:57.766292 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:32:57.816671 sshd[1623]: Connection closed by 10.0.0.1 port 57134 Feb 13 15:32:57.816900 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:57.827747 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:57134.service: Deactivated successfully. Feb 13 15:32:57.829503 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:32:57.831700 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:32:57.849567 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:57144.service - OpenSSH per-connection server daemon (10.0.0.1:57144). Feb 13 15:32:57.850734 systemd-logind[1469]: Removed session 5. Feb 13 15:32:57.891330 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 57144 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:57.892847 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:57.896953 systemd-logind[1469]: New session 6 of user core. Feb 13 15:32:57.916212 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:32:57.969964 sshd[1630]: Connection closed by 10.0.0.1 port 57144 Feb 13 15:32:57.970315 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:57.983981 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:57144.service: Deactivated successfully. Feb 13 15:32:57.985923 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:32:57.987888 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:32:57.994347 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:57148.service - OpenSSH per-connection server daemon (10.0.0.1:57148). Feb 13 15:32:57.995261 systemd-logind[1469]: Removed session 6. Feb 13 15:32:58.033527 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 57148 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:58.035034 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:58.039378 systemd-logind[1469]: New session 7 of user core. Feb 13 15:32:58.050216 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:32:58.108046 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:32:58.108402 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:32:58.125344 sudo[1638]: pam_unix(sudo:session): session closed for user root Feb 13 15:32:58.126681 sshd[1637]: Connection closed by 10.0.0.1 port 57148 Feb 13 15:32:58.127133 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:58.147937 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:57148.service: Deactivated successfully. Feb 13 15:32:58.149661 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:32:58.151097 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:32:58.160382 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:57158.service - OpenSSH per-connection server daemon (10.0.0.1:57158). 
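The sudo record above shows the core user flipping SELinux into enforcing mode during the install flow. setenforce only changes the runtime mode; the persistent setting lives in /etc/selinux/config. A quick usage sketch:

    getenforce        # prints Enforcing, Permissive, or Disabled
    setenforce 1      # runtime switch to enforcing (root only)
    setenforce 0      # runtime switch back to permissive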
Feb 13 15:32:58.161406 systemd-logind[1469]: Removed session 7. Feb 13 15:32:58.200795 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 57158 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:58.202473 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:58.206543 systemd-logind[1469]: New session 8 of user core. Feb 13 15:32:58.215218 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:32:58.269922 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:32:58.270299 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:32:58.273989 sudo[1647]: pam_unix(sudo:session): session closed for user root Feb 13 15:32:58.279863 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:32:58.280202 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:32:58.298428 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:32:58.338824 augenrules[1669]: No rules Feb 13 15:32:58.340103 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:32:58.340392 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:32:58.341876 sudo[1646]: pam_unix(sudo:session): session closed for user root Feb 13 15:32:58.343657 sshd[1645]: Connection closed by 10.0.0.1 port 57158 Feb 13 15:32:58.344211 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:58.362405 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:57158.service: Deactivated successfully. Feb 13 15:32:58.364580 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:32:58.365889 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:32:58.373381 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:57168.service - OpenSSH per-connection server daemon (10.0.0.1:57168). Feb 13 15:32:58.374245 systemd-logind[1469]: Removed session 8. Feb 13 15:32:58.412919 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 57168 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:32:58.414318 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:58.418190 systemd-logind[1469]: New session 9 of user core. Feb 13 15:32:58.432344 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:32:58.484493 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:32:58.484810 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:32:58.918350 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:32:58.918477 (dockerd)[1700]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:32:59.323161 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:32:59.332458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:59.584409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
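The "augenrules[1669]: No rules" line above follows directly from the preceding sudo commands: the shipped rule files under /etc/audit/rules.d/ were deleted, and the audit-rules restart ran augenrules, which merges every *.rules file in that directory into the effective kernel rule set, now empty. A sketch of what a single rule file looks like, using a hypothetical watch on the kubelet state directory:

    # /etc/audit/rules.d/90-kubelet.rules (hypothetical)
    # -w: watch this path; -p wa: record writes and attribute
    # changes; -k: tag matching events with a searchable key
    -w /var/lib/kubelet -p wa -k kubelet-state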
Feb 13 15:32:59.619470 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:32:59.623517 dockerd[1700]: time="2025-02-13T15:32:59.623370281Z" level=info msg="Starting up" Feb 13 15:32:59.714078 kubelet[1718]: E0213 15:32:59.713995 1718 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:32:59.722712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:32:59.722985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:00.088886 systemd[1]: var-lib-docker-metacopy\x2dcheck3644613863-merged.mount: Deactivated successfully. Feb 13 15:33:00.120755 dockerd[1700]: time="2025-02-13T15:33:00.120687378Z" level=info msg="Loading containers: start." Feb 13 15:33:00.318096 kernel: Initializing XFRM netlink socket Feb 13 15:33:00.407636 systemd-networkd[1400]: docker0: Link UP Feb 13 15:33:00.458686 dockerd[1700]: time="2025-02-13T15:33:00.458625363Z" level=info msg="Loading containers: done." Feb 13 15:33:00.472773 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2236103556-merged.mount: Deactivated successfully. Feb 13 15:33:00.503491 dockerd[1700]: time="2025-02-13T15:33:00.503429346Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:33:00.503664 dockerd[1700]: time="2025-02-13T15:33:00.503568287Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:33:00.503816 dockerd[1700]: time="2025-02-13T15:33:00.503784532Z" level=info msg="Daemon has completed initialization" Feb 13 15:33:00.552740 dockerd[1700]: time="2025-02-13T15:33:00.552671884Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:33:00.552941 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:33:01.370954 containerd[1485]: time="2025-02-13T15:33:01.370911665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:33:02.011934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679529910.mount: Deactivated successfully. 
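"API listen on /run/docker.sock" above means the freshly started daemon serves the Docker Engine HTTP API over a unix socket only, with no TCP listener. A sketch of querying it directly, assuming curl 7.40+ (for --unix-socket) and root or docker-group privileges:

    # the same data `docker version` prints, fetched over the socket
    curl --unix-socket /run/docker.sock http://localhost/version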
Feb 13 15:33:03.154558 containerd[1485]: time="2025-02-13T15:33:03.154512343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:03.155233 containerd[1485]: time="2025-02-13T15:33:03.155197348Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283" Feb 13 15:33:03.156366 containerd[1485]: time="2025-02-13T15:33:03.156339590Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:03.158811 containerd[1485]: time="2025-02-13T15:33:03.158784425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:03.159863 containerd[1485]: time="2025-02-13T15:33:03.159834865Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 1.788881411s" Feb 13 15:33:03.159908 containerd[1485]: time="2025-02-13T15:33:03.159868849Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 15:33:03.183047 containerd[1485]: time="2025-02-13T15:33:03.182949884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:33:04.923529 containerd[1485]: time="2025-02-13T15:33:04.923469720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:04.924394 containerd[1485]: time="2025-02-13T15:33:04.924315175Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164" Feb 13 15:33:04.925772 containerd[1485]: time="2025-02-13T15:33:04.925731371Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:04.928852 containerd[1485]: time="2025-02-13T15:33:04.928809844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:04.931354 containerd[1485]: time="2025-02-13T15:33:04.931304463Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 1.748320676s" Feb 13 15:33:04.931354 containerd[1485]: time="2025-02-13T15:33:04.931348034Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 
15:33:04.955645 containerd[1485]: time="2025-02-13T15:33:04.955595898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:33:06.780080 containerd[1485]: time="2025-02-13T15:33:06.779987994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:06.781085 containerd[1485]: time="2025-02-13T15:33:06.781011233Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056" Feb 13 15:33:06.782470 containerd[1485]: time="2025-02-13T15:33:06.782428321Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:06.785792 containerd[1485]: time="2025-02-13T15:33:06.785750280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:06.787082 containerd[1485]: time="2025-02-13T15:33:06.786989705Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 1.831340067s" Feb 13 15:33:06.787082 containerd[1485]: time="2025-02-13T15:33:06.787076528Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 15:33:06.812536 containerd[1485]: time="2025-02-13T15:33:06.812488354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:33:07.863910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752412914.mount: Deactivated successfully. 
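The PullImage/Pulled pairs above come from containerd's CRI plugin, which stores each image under both its tag and its content digest (the repo digest fields ending in sha256:...). A sketch of inspecting that store from the host, assuming the ctr client that ships with containerd is available; CRI-managed images live in the k8s.io namespace:

    # list CRI-managed images together with their digests
    ctr --namespace k8s.io images list | grep kube-scheduler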
Feb 13 15:33:08.094937 containerd[1485]: time="2025-02-13T15:33:08.094885256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:08.095626 containerd[1485]: time="2025-02-13T15:33:08.095563207Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592" Feb 13 15:33:08.096638 containerd[1485]: time="2025-02-13T15:33:08.096606844Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:08.099129 containerd[1485]: time="2025-02-13T15:33:08.099085633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:08.099809 containerd[1485]: time="2025-02-13T15:33:08.099779013Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.287250174s" Feb 13 15:33:08.099845 containerd[1485]: time="2025-02-13T15:33:08.099808609Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:33:08.122010 containerd[1485]: time="2025-02-13T15:33:08.121915678Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:33:08.661764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032308680.mount: Deactivated successfully. 
Feb 13 15:33:09.762460 containerd[1485]: time="2025-02-13T15:33:09.762399661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:09.763162 containerd[1485]: time="2025-02-13T15:33:09.763090356Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:33:09.764235 containerd[1485]: time="2025-02-13T15:33:09.764205507Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:09.766931 containerd[1485]: time="2025-02-13T15:33:09.766892166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:09.768036 containerd[1485]: time="2025-02-13T15:33:09.768003159Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.646045603s" Feb 13 15:33:09.768088 containerd[1485]: time="2025-02-13T15:33:09.768035500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:33:09.789492 containerd[1485]: time="2025-02-13T15:33:09.789457884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:33:09.973148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:33:09.987261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:10.126790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:10.131754 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:10.230629 kubelet[2074]: E0213 15:33:10.230559 2074 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:10.235353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:10.235556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:10.510865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027888257.mount: Deactivated successfully. 
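"Scheduled restart job, restart counter is at 2" above is systemd's Restart= machinery re-running the still-unconfigured kubelet, not anything invoking it by hand; the kubeadm-packaged unit conventionally restarts unconditionally after a 10-second pause, so the missing-config error repeats each cycle until /var/lib/kubelet/config.yaml exists. A sketch of the relevant unit-file shape, assuming kubeadm-style packaging:

    # kubelet.service excerpt (typical kubeadm packaging, assumed here)
    [Service]
    Restart=always
    RestartSec=10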
Feb 13 15:33:10.516863 containerd[1485]: time="2025-02-13T15:33:10.516813987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:10.517660 containerd[1485]: time="2025-02-13T15:33:10.517619948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:33:10.518709 containerd[1485]: time="2025-02-13T15:33:10.518669627Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:10.521139 containerd[1485]: time="2025-02-13T15:33:10.521102048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:10.522040 containerd[1485]: time="2025-02-13T15:33:10.522001054Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 732.511721ms" Feb 13 15:33:10.522040 containerd[1485]: time="2025-02-13T15:33:10.522032954Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:33:10.542445 containerd[1485]: time="2025-02-13T15:33:10.542405510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:33:11.239892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119529998.mount: Deactivated successfully. Feb 13 15:33:12.876342 containerd[1485]: time="2025-02-13T15:33:12.876268927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:12.877090 containerd[1485]: time="2025-02-13T15:33:12.877036166Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Feb 13 15:33:12.878385 containerd[1485]: time="2025-02-13T15:33:12.878354648Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:12.881555 containerd[1485]: time="2025-02-13T15:33:12.881492743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:12.882627 containerd[1485]: time="2025-02-13T15:33:12.882595101Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.340148063s" Feb 13 15:33:12.882668 containerd[1485]: time="2025-02-13T15:33:12.882629786Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 15:33:15.388391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
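The kubelet stop above and the reload request that follows ("Reloading requested from client PID 2224 ('systemctl') (unit session-9.scope)") trace back to the install.sh run in session 9: unit files changed on disk, so the manager has to re-read them before kubelet is started again. A sketch of the conventional sequence, assuming the script manages kubelet directly:

    # re-read changed unit files, then restart the service
    systemctl daemon-reload
    systemctl restart kubelet.service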
Feb 13 15:33:15.406320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:15.422728 systemd[1]: Reloading requested from client PID 2224 ('systemctl') (unit session-9.scope)... Feb 13 15:33:15.422749 systemd[1]: Reloading... Feb 13 15:33:15.513090 zram_generator::config[2266]: No configuration found. Feb 13 15:33:16.032018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:33:16.108652 systemd[1]: Reloading finished in 685 ms. Feb 13 15:33:16.160152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:16.165198 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:33:16.165435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:16.167034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:16.307374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:16.312952 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:33:16.353869 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:16.353869 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:33:16.353869 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:16.354253 kubelet[2313]: I0213 15:33:16.353907 2313 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:33:16.664429 kubelet[2313]: I0213 15:33:16.664323 2313 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:33:16.664429 kubelet[2313]: I0213 15:33:16.664353 2313 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:33:16.664630 kubelet[2313]: I0213 15:33:16.664610 2313 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:33:16.683788 kubelet[2313]: I0213 15:33:16.683751 2313 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:33:16.685001 kubelet[2313]: E0213 15:33:16.684976 2313 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:16.693732 kubelet[2313]: I0213 15:33:16.693712 2313 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:33:16.693965 kubelet[2313]: I0213 15:33:16.693945 2313 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:33:16.694125 kubelet[2313]: I0213 15:33:16.694105 2313 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:33:16.694219 kubelet[2313]: I0213 15:33:16.694130 2313 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:33:16.694219 kubelet[2313]: I0213 15:33:16.694139 2313 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:33:16.694276 kubelet[2313]: I0213 15:33:16.694252 2313 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:16.694348 kubelet[2313]: I0213 15:33:16.694336 2313 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:33:16.694382 kubelet[2313]: I0213 15:33:16.694350 2313 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:33:16.694382 kubelet[2313]: I0213 15:33:16.694374 2313 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:33:16.694440 kubelet[2313]: I0213 15:33:16.694392 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:33:16.695409 kubelet[2313]: W0213 15:33:16.695251 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:16.695486 kubelet[2313]: E0213 15:33:16.695437 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:16.695486 kubelet[2313]: I0213 15:33:16.695465 2313 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:33:16.696345 kubelet[2313]: W0213 15:33:16.696252 2313 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:16.696345 kubelet[2313]: E0213 15:33:16.696294 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:17.078736 kubelet[2313]: I0213 15:33:17.078604 2313 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:33:17.078736 kubelet[2313]: W0213 15:33:17.078730 2313 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:33:17.079576 kubelet[2313]: I0213 15:33:17.079551 2313 server.go:1256] "Started kubelet" Feb 13 15:33:17.079648 kubelet[2313]: I0213 15:33:17.079631 2313 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:33:17.079998 kubelet[2313]: I0213 15:33:17.079957 2313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:33:17.080293 kubelet[2313]: I0213 15:33:17.080265 2313 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:33:17.080579 kubelet[2313]: I0213 15:33:17.080555 2313 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:33:17.081062 kubelet[2313]: I0213 15:33:17.081035 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:33:17.084821 kubelet[2313]: E0213 15:33:17.084703 2313 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:17.084821 kubelet[2313]: I0213 15:33:17.084743 2313 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:33:17.085413 kubelet[2313]: I0213 15:33:17.085299 2313 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:33:17.085660 kubelet[2313]: I0213 15:33:17.085629 2313 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:33:17.086901 kubelet[2313]: W0213 15:33:17.086809 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:17.086901 kubelet[2313]: E0213 15:33:17.086894 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:17.088093 kubelet[2313]: E0213 15:33:17.087656 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Feb 13 15:33:17.088093 kubelet[2313]: I0213 15:33:17.087698 2313 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:33:17.088093 kubelet[2313]: E0213 15:33:17.087746 2313 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:33:17.088093 kubelet[2313]: I0213 15:33:17.087811 2313 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:33:17.089190 kubelet[2313]: I0213 15:33:17.089125 2313 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:33:17.089190 kubelet[2313]: E0213 15:33:17.089135 2313 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce60fea4ab52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:33:17.079530322 +0000 UTC m=+0.762601971,LastTimestamp:2025-02-13 15:33:17.079530322 +0000 UTC m=+0.762601971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:33:17.101693 kubelet[2313]: I0213 15:33:17.101644 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:33:17.103476 kubelet[2313]: I0213 15:33:17.103440 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:33:17.103476 kubelet[2313]: I0213 15:33:17.103473 2313 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:33:17.103685 kubelet[2313]: I0213 15:33:17.103493 2313 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:33:17.103685 kubelet[2313]: E0213 15:33:17.103555 2313 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:33:17.104061 kubelet[2313]: W0213 15:33:17.104023 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:17.104114 kubelet[2313]: E0213 15:33:17.104088 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:17.104426 kubelet[2313]: I0213 15:33:17.104406 2313 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:33:17.104426 kubelet[2313]: I0213 15:33:17.104422 2313 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:33:17.104512 kubelet[2313]: I0213 15:33:17.104436 2313 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:17.107340 kubelet[2313]: I0213 15:33:17.107311 2313 policy_none.go:49] "None policy: Start" Feb 13 15:33:17.107792 kubelet[2313]: I0213 15:33:17.107748 2313 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:33:17.107866 kubelet[2313]: I0213 15:33:17.107794 2313 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:33:17.114122 systemd[1]: Created 
slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:33:17.128106 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:33:17.131017 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:33:17.137893 kubelet[2313]: I0213 15:33:17.137862 2313 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:33:17.138259 kubelet[2313]: I0213 15:33:17.138192 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:33:17.139024 kubelet[2313]: E0213 15:33:17.138999 2313 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:33:17.186597 kubelet[2313]: I0213 15:33:17.186578 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:17.186947 kubelet[2313]: E0213 15:33:17.186931 2313 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Feb 13 15:33:17.204232 kubelet[2313]: I0213 15:33:17.204204 2313 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:33:17.204882 kubelet[2313]: I0213 15:33:17.204868 2313 topology_manager.go:215] "Topology Admit Handler" podUID="694667830edcd09fcae43ba1d8e72ee5" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:33:17.205476 kubelet[2313]: I0213 15:33:17.205448 2313 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:33:17.210864 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice. Feb 13 15:33:17.236934 systemd[1]: Created slice kubepods-burstable-pod694667830edcd09fcae43ba1d8e72ee5.slice - libcontainer container kubepods-burstable-pod694667830edcd09fcae43ba1d8e72ee5.slice. Feb 13 15:33:17.248977 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice. 
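
The nodeConfig dump a few entries above includes the kubelet's hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%. A minimal Go sketch that decodes the logged JSON fragment; the struct types are illustrative, not the kubelet's own, and encoding/json simply ignores the fields (GracePeriod, MinReclaim) they skip:

    // evictions.go — decode the HardEvictionThresholds fragment from the
    // nodeConfig log entry above. Illustrative types only.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type value struct {
        Quantity   *string `json:"Quantity"`
        Percentage float64 `json:"Percentage"`
    }

    type threshold struct {
        Signal   string `json:"Signal"`
        Operator string `json:"Operator"`
        Value    value  `json:"Value"`
    }

    func main() {
        // Copied from the container_manager_linux.go:270 entry above.
        raw := `[
            {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
            {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
            {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
            {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]`

        var ts []threshold
        if err := json.Unmarshal([]byte(raw), &ts); err != nil {
            panic(err)
        }
        for _, t := range ts {
            if t.Value.Quantity != nil {
                fmt.Printf("evict when %s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("evict when %s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }
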
Feb 13 15:33:17.289025 kubelet[2313]: E0213 15:33:17.288990 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Feb 13 15:33:17.387370 kubelet[2313]: I0213 15:33:17.387283 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:33:17.387370 kubelet[2313]: I0213 15:33:17.387314 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:17.387370 kubelet[2313]: I0213 15:33:17.387332 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:17.387370 kubelet[2313]: I0213 15:33:17.387349 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:17.388095 kubelet[2313]: I0213 15:33:17.387389 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/694667830edcd09fcae43ba1d8e72ee5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"694667830edcd09fcae43ba1d8e72ee5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:17.388095 kubelet[2313]: I0213 15:33:17.387425 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/694667830edcd09fcae43ba1d8e72ee5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"694667830edcd09fcae43ba1d8e72ee5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:17.388095 kubelet[2313]: I0213 15:33:17.387445 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/694667830edcd09fcae43ba1d8e72ee5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"694667830edcd09fcae43ba1d8e72ee5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:17.388095 kubelet[2313]: I0213 15:33:17.387463 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:17.388095 kubelet[2313]: I0213 
15:33:17.387482 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:17.388516 kubelet[2313]: I0213 15:33:17.388473 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:17.388810 kubelet[2313]: E0213 15:33:17.388786 2313 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Feb 13 15:33:17.524615 kubelet[2313]: W0213 15:33:17.524566 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:17.524615 kubelet[2313]: E0213 15:33:17.524616 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:17.534777 kubelet[2313]: E0213 15:33:17.534734 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:17.535427 containerd[1485]: time="2025-02-13T15:33:17.535382298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:17.547672 kubelet[2313]: E0213 15:33:17.547633 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:17.548154 containerd[1485]: time="2025-02-13T15:33:17.548122109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:694667830edcd09fcae43ba1d8e72ee5,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:17.551427 kubelet[2313]: E0213 15:33:17.551392 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:17.551714 containerd[1485]: time="2025-02-13T15:33:17.551692174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:17.690237 kubelet[2313]: E0213 15:33:17.690123 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" Feb 13 15:33:17.790924 kubelet[2313]: I0213 15:33:17.790874 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:17.791300 kubelet[2313]: E0213 15:33:17.791281 2313 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection 
refused" node="localhost" Feb 13 15:33:17.805685 kubelet[2313]: W0213 15:33:17.805613 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:17.805685 kubelet[2313]: E0213 15:33:17.805675 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:18.112324 kubelet[2313]: W0213 15:33:18.112201 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:18.112324 kubelet[2313]: E0213 15:33:18.112241 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:18.289948 kubelet[2313]: W0213 15:33:18.289896 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:18.289948 kubelet[2313]: E0213 15:33:18.289935 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:18.464007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187652493.mount: Deactivated successfully. 
Feb 13 15:33:18.475323 containerd[1485]: time="2025-02-13T15:33:18.475275353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:18.479602 containerd[1485]: time="2025-02-13T15:33:18.479550670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:33:18.480507 containerd[1485]: time="2025-02-13T15:33:18.480478480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:18.481545 containerd[1485]: time="2025-02-13T15:33:18.481499525Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:18.482429 containerd[1485]: time="2025-02-13T15:33:18.482393993Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:18.483354 containerd[1485]: time="2025-02-13T15:33:18.483309880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:33:18.484399 containerd[1485]: time="2025-02-13T15:33:18.484347226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:33:18.485844 containerd[1485]: time="2025-02-13T15:33:18.485807564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:18.486569 containerd[1485]: time="2025-02-13T15:33:18.486535169Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 951.041412ms" Feb 13 15:33:18.489131 containerd[1485]: time="2025-02-13T15:33:18.489101852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 937.334878ms" Feb 13 15:33:18.491454 kubelet[2313]: E0213 15:33:18.491413 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s" Feb 13 15:33:18.494622 containerd[1485]: time="2025-02-13T15:33:18.494565699Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 946.35847ms" Feb 13 
15:33:18.593301 kubelet[2313]: I0213 15:33:18.593267 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:18.593928 kubelet[2313]: E0213 15:33:18.593882 2313 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Feb 13 15:33:18.610592 containerd[1485]: time="2025-02-13T15:33:18.610428373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:18.610592 containerd[1485]: time="2025-02-13T15:33:18.610504385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:18.610592 containerd[1485]: time="2025-02-13T15:33:18.610537217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:18.611292 containerd[1485]: time="2025-02-13T15:33:18.610520145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:18.611292 containerd[1485]: time="2025-02-13T15:33:18.610674324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:18.611292 containerd[1485]: time="2025-02-13T15:33:18.610793297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:18.611292 containerd[1485]: time="2025-02-13T15:33:18.610737633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:18.611292 containerd[1485]: time="2025-02-13T15:33:18.610891822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:18.617126 containerd[1485]: time="2025-02-13T15:33:18.616806424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:18.617126 containerd[1485]: time="2025-02-13T15:33:18.616867168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:18.617126 containerd[1485]: time="2025-02-13T15:33:18.616884931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:18.617126 containerd[1485]: time="2025-02-13T15:33:18.616965542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:18.634218 systemd[1]: Started cri-containerd-ecccf184d414994935b560e56530903a9f4a9ccf67316f651da202ed5a9c41ba.scope - libcontainer container ecccf184d414994935b560e56530903a9f4a9ccf67316f651da202ed5a9c41ba. Feb 13 15:33:18.639666 systemd[1]: Started cri-containerd-7b6b64e7d3424cc038e27bfe197478f4d196c020442a4975f8cd18551a6bca8e.scope - libcontainer container 7b6b64e7d3424cc038e27bfe197478f4d196c020442a4975f8cd18551a6bca8e. Feb 13 15:33:18.641708 systemd[1]: Started cri-containerd-fa89c5c24664b3741bdaf7fa6e0c8423d32d133b32d6cae8bbd1f032797be1bc.scope - libcontainer container fa89c5c24664b3741bdaf7fa6e0c8423d32d133b32d6cae8bbd1f032797be1bc. 
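
The pull messages just above identify the pause image two ways: by mutable repo tag (registry.k8s.io/pause:3.8) and by content-addressed repo digest (registry.k8s.io/pause@sha256:9001…). A small Go sketch splitting a reference into repository, tag, and digest; a real runtime uses a full reference parser, and this naive version would misread a registry host with a port:

    // imageref.go — split an image reference into repository, tag, and
    // digest, using the two forms from the pull log above.
    package main

    import (
        "fmt"
        "strings"
    )

    func split(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            return ref[:i], "", ref[i+1:] // pinned by content digest
        }
        if i := strings.LastIndex(ref, ":"); i >= 0 {
            return ref[:i], ref[i+1:], "" // mutable tag
        }
        return ref, "", ""
    }

    func main() {
        for _, ref := range []string{
            "registry.k8s.io/pause:3.8",
            "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d",
        } {
            repo, tag, digest := split(ref)
            fmt.Printf("repo=%s tag=%q digest=%q\n", repo, tag, digest)
        }
    }
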
Feb 13 15:33:18.676310 containerd[1485]: time="2025-02-13T15:33:18.676272304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:694667830edcd09fcae43ba1d8e72ee5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecccf184d414994935b560e56530903a9f4a9ccf67316f651da202ed5a9c41ba\"" Feb 13 15:33:18.677922 kubelet[2313]: E0213 15:33:18.677829 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:18.679969 containerd[1485]: time="2025-02-13T15:33:18.679918822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b6b64e7d3424cc038e27bfe197478f4d196c020442a4975f8cd18551a6bca8e\"" Feb 13 15:33:18.681188 kubelet[2313]: E0213 15:33:18.680728 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:18.684185 containerd[1485]: time="2025-02-13T15:33:18.684158844Z" level=info msg="CreateContainer within sandbox \"ecccf184d414994935b560e56530903a9f4a9ccf67316f651da202ed5a9c41ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:33:18.686007 containerd[1485]: time="2025-02-13T15:33:18.685971172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa89c5c24664b3741bdaf7fa6e0c8423d32d133b32d6cae8bbd1f032797be1bc\"" Feb 13 15:33:18.687046 containerd[1485]: time="2025-02-13T15:33:18.686933076Z" level=info msg="CreateContainer within sandbox \"7b6b64e7d3424cc038e27bfe197478f4d196c020442a4975f8cd18551a6bca8e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:33:18.687189 kubelet[2313]: E0213 15:33:18.687149 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:18.689467 containerd[1485]: time="2025-02-13T15:33:18.689442112Z" level=info msg="CreateContainer within sandbox \"fa89c5c24664b3741bdaf7fa6e0c8423d32d133b32d6cae8bbd1f032797be1bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:33:18.719208 containerd[1485]: time="2025-02-13T15:33:18.719087728Z" level=info msg="CreateContainer within sandbox \"7b6b64e7d3424cc038e27bfe197478f4d196c020442a4975f8cd18551a6bca8e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c41c7038a6955064bddfdf07f392369cbaebf6f93366d2035173faaf04df3d9f\"" Feb 13 15:33:18.720112 containerd[1485]: time="2025-02-13T15:33:18.719824790Z" level=info msg="StartContainer for \"c41c7038a6955064bddfdf07f392369cbaebf6f93366d2035173faaf04df3d9f\"" Feb 13 15:33:18.720590 containerd[1485]: time="2025-02-13T15:33:18.720505457Z" level=info msg="CreateContainer within sandbox \"ecccf184d414994935b560e56530903a9f4a9ccf67316f651da202ed5a9c41ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"94bff84f44b60755cb9d4f737401fa3420270ce54cd0d14b52aad26bb0146329\"" Feb 13 15:33:18.720937 containerd[1485]: time="2025-02-13T15:33:18.720915055Z" level=info msg="StartContainer for \"94bff84f44b60755cb9d4f737401fa3420270ce54cd0d14b52aad26bb0146329\"" Feb 13 
15:33:18.725698 containerd[1485]: time="2025-02-13T15:33:18.725651096Z" level=info msg="CreateContainer within sandbox \"fa89c5c24664b3741bdaf7fa6e0c8423d32d133b32d6cae8bbd1f032797be1bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3fbb5240ed545bf6744166df8c4745cd73b471d8a1c424c8e5e1eac2ba6eb2bb\"" Feb 13 15:33:18.726157 containerd[1485]: time="2025-02-13T15:33:18.726139442Z" level=info msg="StartContainer for \"3fbb5240ed545bf6744166df8c4745cd73b471d8a1c424c8e5e1eac2ba6eb2bb\"" Feb 13 15:33:18.740910 kubelet[2313]: E0213 15:33:18.740781 2313 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.112:6443: connect: connection refused Feb 13 15:33:18.752241 systemd[1]: Started cri-containerd-c41c7038a6955064bddfdf07f392369cbaebf6f93366d2035173faaf04df3d9f.scope - libcontainer container c41c7038a6955064bddfdf07f392369cbaebf6f93366d2035173faaf04df3d9f. Feb 13 15:33:18.763368 systemd[1]: Started cri-containerd-3fbb5240ed545bf6744166df8c4745cd73b471d8a1c424c8e5e1eac2ba6eb2bb.scope - libcontainer container 3fbb5240ed545bf6744166df8c4745cd73b471d8a1c424c8e5e1eac2ba6eb2bb. Feb 13 15:33:18.765477 systemd[1]: Started cri-containerd-94bff84f44b60755cb9d4f737401fa3420270ce54cd0d14b52aad26bb0146329.scope - libcontainer container 94bff84f44b60755cb9d4f737401fa3420270ce54cd0d14b52aad26bb0146329. Feb 13 15:33:18.800881 containerd[1485]: time="2025-02-13T15:33:18.800829723Z" level=info msg="StartContainer for \"c41c7038a6955064bddfdf07f392369cbaebf6f93366d2035173faaf04df3d9f\" returns successfully" Feb 13 15:33:18.949408 containerd[1485]: time="2025-02-13T15:33:18.949343059Z" level=info msg="StartContainer for \"94bff84f44b60755cb9d4f737401fa3420270ce54cd0d14b52aad26bb0146329\" returns successfully" Feb 13 15:33:18.949408 containerd[1485]: time="2025-02-13T15:33:18.949388013Z" level=info msg="StartContainer for \"3fbb5240ed545bf6744166df8c4745cd73b471d8a1c424c8e5e1eac2ba6eb2bb\" returns successfully" Feb 13 15:33:19.110754 kubelet[2313]: E0213 15:33:19.110633 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:19.114045 kubelet[2313]: E0213 15:33:19.113841 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:19.114254 kubelet[2313]: E0213 15:33:19.114121 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:19.989118 kubelet[2313]: E0213 15:33:19.989083 2313 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 15:33:20.095620 kubelet[2313]: E0213 15:33:20.095576 2313 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:33:20.116755 kubelet[2313]: E0213 15:33:20.116688 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 13 15:33:20.195251 kubelet[2313]: I0213 15:33:20.195220 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:20.223985 kubelet[2313]: I0213 15:33:20.223947 2313 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:33:20.698290 kubelet[2313]: I0213 15:33:20.698229 2313 apiserver.go:52] "Watching apiserver" Feb 13 15:33:20.786210 kubelet[2313]: I0213 15:33:20.786172 2313 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:33:22.828713 kubelet[2313]: E0213 15:33:22.828667 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:23.119887 kubelet[2313]: E0213 15:33:23.119768 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:23.172010 systemd[1]: Reloading requested from client PID 2587 ('systemctl') (unit session-9.scope)... Feb 13 15:33:23.172037 systemd[1]: Reloading... Feb 13 15:33:23.263134 zram_generator::config[2634]: No configuration found. Feb 13 15:33:23.875973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:33:23.972281 systemd[1]: Reloading finished in 799 ms. Feb 13 15:33:24.017534 kubelet[2313]: I0213 15:33:24.017484 2313 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:33:24.017704 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:24.031601 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:33:24.031898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:24.042463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:24.190289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:24.194868 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:33:24.238155 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:24.238155 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:33:24.238155 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:33:24.238540 kubelet[2671]: I0213 15:33:24.238217 2671 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:33:24.242635 kubelet[2671]: I0213 15:33:24.242612 2671 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:33:24.242635 kubelet[2671]: I0213 15:33:24.242635 2671 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:33:24.242883 kubelet[2671]: I0213 15:33:24.242866 2671 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:33:24.244210 kubelet[2671]: I0213 15:33:24.244190 2671 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:33:24.245895 kubelet[2671]: I0213 15:33:24.245745 2671 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:33:24.259401 kubelet[2671]: I0213 15:33:24.259231 2671 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:33:24.259544 kubelet[2671]: I0213 15:33:24.259456 2671 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:33:24.261471 kubelet[2671]: I0213 15:33:24.259611 2671 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:33:24.261471 kubelet[2671]: I0213 15:33:24.259641 2671 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:33:24.261471 kubelet[2671]: I0213 15:33:24.259650 2671 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:33:24.261471 kubelet[2671]: I0213 15:33:24.259678 2671 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:24.261471 kubelet[2671]: I0213 15:33:24.259817 2671 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:33:24.261471 kubelet[2671]: I0213 15:33:24.259832 2671 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:33:24.261471 kubelet[2671]: I0213 15:33:24.259857 2671 kubelet.go:312] "Adding apiserver pod source" Feb 13 
15:33:24.261776 kubelet[2671]: I0213 15:33:24.259872 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:33:24.261776 kubelet[2671]: I0213 15:33:24.260680 2671 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:33:24.261776 kubelet[2671]: I0213 15:33:24.260960 2671 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:33:24.261776 kubelet[2671]: I0213 15:33:24.261506 2671 server.go:1256] "Started kubelet" Feb 13 15:33:24.261873 kubelet[2671]: I0213 15:33:24.261839 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:33:24.262575 kubelet[2671]: I0213 15:33:24.261916 2671 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:33:24.262575 kubelet[2671]: I0213 15:33:24.262172 2671 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:33:24.265093 kubelet[2671]: I0213 15:33:24.262979 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:33:24.266160 kubelet[2671]: I0213 15:33:24.266123 2671 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:33:24.270923 kubelet[2671]: I0213 15:33:24.270888 2671 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:33:24.271032 kubelet[2671]: I0213 15:33:24.270980 2671 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:33:24.271147 kubelet[2671]: I0213 15:33:24.271122 2671 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:33:24.273662 kubelet[2671]: I0213 15:33:24.273635 2671 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:33:24.273765 kubelet[2671]: I0213 15:33:24.273725 2671 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:33:24.275263 kubelet[2671]: E0213 15:33:24.275237 2671 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:33:24.275369 kubelet[2671]: I0213 15:33:24.275340 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:33:24.275736 kubelet[2671]: I0213 15:33:24.275710 2671 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:33:24.276728 kubelet[2671]: I0213 15:33:24.276703 2671 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:33:24.276728 kubelet[2671]: I0213 15:33:24.276730 2671 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:33:24.276809 kubelet[2671]: I0213 15:33:24.276752 2671 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:33:24.276840 kubelet[2671]: E0213 15:33:24.276819 2671 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:33:24.310900 kubelet[2671]: I0213 15:33:24.310821 2671 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:33:24.311347 kubelet[2671]: I0213 15:33:24.311330 2671 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:33:24.311392 kubelet[2671]: I0213 15:33:24.311358 2671 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:24.311558 kubelet[2671]: I0213 15:33:24.311534 2671 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:33:24.311612 kubelet[2671]: I0213 15:33:24.311565 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:33:24.311612 kubelet[2671]: I0213 15:33:24.311575 2671 policy_none.go:49] "None policy: Start" Feb 13 15:33:24.312589 kubelet[2671]: I0213 15:33:24.312547 2671 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:33:24.312589 kubelet[2671]: I0213 15:33:24.312575 2671 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:33:24.312741 kubelet[2671]: I0213 15:33:24.312731 2671 state_mem.go:75] "Updated machine memory state" Feb 13 15:33:24.318097 kubelet[2671]: I0213 15:33:24.317944 2671 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:33:24.318281 kubelet[2671]: I0213 15:33:24.318227 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:33:24.354431 sudo[2704]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:33:24.354798 sudo[2704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:33:24.376615 kubelet[2671]: I0213 15:33:24.376559 2671 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:24.377400 kubelet[2671]: I0213 15:33:24.376869 2671 topology_manager.go:215] "Topology Admit Handler" podUID="694667830edcd09fcae43ba1d8e72ee5" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:33:24.377400 kubelet[2671]: I0213 15:33:24.376927 2671 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:33:24.377400 kubelet[2671]: I0213 15:33:24.376955 2671 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:33:24.385988 kubelet[2671]: E0213 15:33:24.385420 2671 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:24.385988 kubelet[2671]: I0213 15:33:24.385458 2671 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:33:24.385988 kubelet[2671]: I0213 15:33:24.385516 2671 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:33:24.471926 kubelet[2671]: I0213 15:33:24.471819 2671 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:24.471926 kubelet[2671]: I0213 15:33:24.471857 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:24.471926 kubelet[2671]: I0213 15:33:24.471878 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:24.471926 kubelet[2671]: I0213 15:33:24.471920 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:24.472185 kubelet[2671]: I0213 15:33:24.471938 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:33:24.472185 kubelet[2671]: I0213 15:33:24.471955 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/694667830edcd09fcae43ba1d8e72ee5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"694667830edcd09fcae43ba1d8e72ee5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:24.472185 kubelet[2671]: I0213 15:33:24.471971 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/694667830edcd09fcae43ba1d8e72ee5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"694667830edcd09fcae43ba1d8e72ee5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:24.472185 kubelet[2671]: I0213 15:33:24.471991 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/694667830edcd09fcae43ba1d8e72ee5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"694667830edcd09fcae43ba1d8e72ee5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:24.472185 kubelet[2671]: I0213 15:33:24.472008 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:24.688516 kubelet[2671]: E0213 
15:33:24.686991 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:24.688516 kubelet[2671]: E0213 15:33:24.688453 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:24.690693 kubelet[2671]: E0213 15:33:24.690432 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:24.829911 sudo[2704]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:25.261131 kubelet[2671]: I0213 15:33:25.261087 2671 apiserver.go:52] "Watching apiserver" Feb 13 15:33:25.271700 kubelet[2671]: I0213 15:33:25.271654 2671 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:33:25.294303 kubelet[2671]: E0213 15:33:25.294278 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:25.470767 kubelet[2671]: I0213 15:33:25.470723 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.470681151 podStartE2EDuration="3.470681151s" podCreationTimestamp="2025-02-13 15:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:25.470480347 +0000 UTC m=+1.271592442" watchObservedRunningTime="2025-02-13 15:33:25.470681151 +0000 UTC m=+1.271793246" Feb 13 15:33:25.483971 kubelet[2671]: E0213 15:33:25.483933 2671 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:33:25.484506 kubelet[2671]: E0213 15:33:25.484467 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:25.552748 kubelet[2671]: E0213 15:33:25.552600 2671 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:33:25.553225 kubelet[2671]: E0213 15:33:25.553008 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:25.839788 kubelet[2671]: I0213 15:33:25.839658 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.839616411 podStartE2EDuration="1.839616411s" podCreationTimestamp="2025-02-13 15:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:25.839451607 +0000 UTC m=+1.640563702" watchObservedRunningTime="2025-02-13 15:33:25.839616411 +0000 UTC m=+1.640728507" Feb 13 15:33:26.295909 kubelet[2671]: E0213 15:33:26.295862 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
15:33:26.296443 kubelet[2671]: E0213 15:33:26.296101 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:26.327720 kubelet[2671]: I0213 15:33:26.327528 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.327481217 podStartE2EDuration="2.327481217s" podCreationTimestamp="2025-02-13 15:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:26.327180755 +0000 UTC m=+2.128292880" watchObservedRunningTime="2025-02-13 15:33:26.327481217 +0000 UTC m=+2.128593313" Feb 13 15:33:27.164204 sudo[1680]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:27.165745 sshd[1679]: Connection closed by 10.0.0.1 port 57168 Feb 13 15:33:27.166191 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:27.171263 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:57168.service: Deactivated successfully. Feb 13 15:33:27.173329 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:33:27.173518 systemd[1]: session-9.scope: Consumed 4.952s CPU time, 188.4M memory peak, 0B memory swap peak. Feb 13 15:33:27.174141 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:33:27.175177 systemd-logind[1469]: Removed session 9. Feb 13 15:33:27.298844 kubelet[2671]: E0213 15:33:27.298784 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:30.389248 kubelet[2671]: E0213 15:33:30.389212 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:31.304953 kubelet[2671]: E0213 15:33:31.304913 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:32.283447 update_engine[1473]: I20250213 15:33:32.283368 1473 update_attempter.cc:509] Updating boot flags... 
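
The "m=+1.271592442" suffixes in the pod-startup timestamps above are Go's time.Time formatting: the m= value is the monotonic-clock reading, in seconds since the process started (the kubelet[2671] restart at 15:33:24 plus ~1.27s lands exactly on the 15:33:25 event). Durations such as podStartSLOduration are computed from that monotonic reading, so wall-clock adjustments cannot skew them. A short demonstration:

    // mono.go — reproduce the m=+ suffix seen in the kubelet timestamps
    // above. time.Now() carries a wall-clock and a monotonic reading;
    // printing the value shows both, and Sub subtracts the monotonic one.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(50 * time.Millisecond)
        now := time.Now()
        fmt.Println(now)            // e.g. "... m=+0.050123456"
        fmt.Println(now.Sub(start)) // ~50ms, immune to wall-clock changes
    }
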
Feb 13 15:33:32.352216 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2755) Feb 13 15:33:32.392300 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2759) Feb 13 15:33:32.418110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2759) Feb 13 15:33:34.500500 kubelet[2671]: E0213 15:33:34.500447 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:35.310627 kubelet[2671]: E0213 15:33:35.310598 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:36.365926 kubelet[2671]: E0213 15:33:36.365882 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:36.973611 kubelet[2671]: I0213 15:33:36.973583 2671 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:33:36.973940 containerd[1485]: time="2025-02-13T15:33:36.973892799Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:33:36.974340 kubelet[2671]: I0213 15:33:36.974084 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:33:37.538117 kubelet[2671]: I0213 15:33:37.538041 2671 topology_manager.go:215] "Topology Admit Handler" podUID="6124af7a-ed58-43bf-9447-76ee4b6ab703" podNamespace="kube-system" podName="kube-proxy-qhbgv" Feb 13 15:33:37.538895 kubelet[2671]: I0213 15:33:37.538408 2671 topology_manager.go:215] "Topology Admit Handler" podUID="21cb29ba-ee78-45df-a9ab-80ef16c632c3" podNamespace="kube-system" podName="cilium-pltp4" Feb 13 15:33:37.549439 systemd[1]: Created slice kubepods-burstable-pod21cb29ba_ee78_45df_a9ab_80ef16c632c3.slice - libcontainer container kubepods-burstable-pod21cb29ba_ee78_45df_a9ab_80ef16c632c3.slice. 
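
With the node registered, the kubelet pushes the node's pod CIDR (192.168.0.0/24 above) down to the runtime, and containerd then waits for a CNI plugin to install a network config ("No cni config template is specified"). A two-line Go check of what that CIDR provides on this node, assuming the conventional network/broadcast exclusion:

    // podcidr.go — inspect the pod CIDR handed to the runtime above.
    // The -2 assumes the usual network/broadcast exclusion.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        fmt.Printf("network=%v usable pod IPs=%d\n", ipnet, 1<<(bits-ones)-2)
    }
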
Feb 13 15:33:37.556769 kubelet[2671]: I0213 15:33:37.556308 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cni-path\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.556769 kubelet[2671]: I0213 15:33:37.556344 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6124af7a-ed58-43bf-9447-76ee4b6ab703-xtables-lock\") pod \"kube-proxy-qhbgv\" (UID: \"6124af7a-ed58-43bf-9447-76ee4b6ab703\") " pod="kube-system/kube-proxy-qhbgv" Feb 13 15:33:37.556769 kubelet[2671]: I0213 15:33:37.556364 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21cb29ba-ee78-45df-a9ab-80ef16c632c3-clustermesh-secrets\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.556769 kubelet[2671]: I0213 15:33:37.556384 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6124af7a-ed58-43bf-9447-76ee4b6ab703-lib-modules\") pod \"kube-proxy-qhbgv\" (UID: \"6124af7a-ed58-43bf-9447-76ee4b6ab703\") " pod="kube-system/kube-proxy-qhbgv" Feb 13 15:33:37.556769 kubelet[2671]: I0213 15:33:37.556404 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-hostproc\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.556769 kubelet[2671]: I0213 15:33:37.556422 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-etc-cni-netd\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.556326 systemd[1]: Created slice kubepods-besteffort-pod6124af7a_ed58_43bf_9447_76ee4b6ab703.slice - libcontainer container kubepods-besteffort-pod6124af7a_ed58_43bf_9447_76ee4b6ab703.slice. 
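
Each reconciler_common.go:258 entry above identifies a volume by its UniqueName, which for these plugins follows the pattern kubernetes.io/<plugin>/<pod-uid>-<volume-name>. A small parser for pulling those pieces back out of a journal line; a convenience sketch, whose only assumption beyond the visible pattern is the fixed 36-character UID width:

    package main

    import (
        "fmt"
        "strings"
    )

    // parseUniqueName splits "kubernetes.io/<plugin>/<pod-uid>-<volume>" as seen
    // in the reconciler entries above.
    func parseUniqueName(u string) (plugin, podUID, volume string) {
        rest := strings.TrimPrefix(u, "kubernetes.io/")
        plugin, rest, _ = strings.Cut(rest, "/")
        if len(rest) > 37 { // UUIDs are 36 chars, then "-" before the volume name
            podUID, volume = rest[:36], rest[37:]
        }
        return
    }

    func main() {
        fmt.Println(parseUniqueName(
            "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cni-path"))
        // host-path 21cb29ba-ee78-45df-a9ab-80ef16c632c3 cni-path
    }
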
Feb 13 15:33:37.557191 kubelet[2671]: I0213 15:33:37.556438 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-lib-modules\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557191 kubelet[2671]: I0213 15:33:37.556455 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-host-proc-sys-net\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557191 kubelet[2671]: I0213 15:33:37.556472 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6124af7a-ed58-43bf-9447-76ee4b6ab703-kube-proxy\") pod \"kube-proxy-qhbgv\" (UID: \"6124af7a-ed58-43bf-9447-76ee4b6ab703\") " pod="kube-system/kube-proxy-qhbgv" Feb 13 15:33:37.557191 kubelet[2671]: I0213 15:33:37.556490 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtpv5\" (UniqueName: \"kubernetes.io/projected/6124af7a-ed58-43bf-9447-76ee4b6ab703-kube-api-access-xtpv5\") pod \"kube-proxy-qhbgv\" (UID: \"6124af7a-ed58-43bf-9447-76ee4b6ab703\") " pod="kube-system/kube-proxy-qhbgv" Feb 13 15:33:37.557191 kubelet[2671]: I0213 15:33:37.556509 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-run\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557191 kubelet[2671]: I0213 15:33:37.556537 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-xtables-lock\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557362 kubelet[2671]: I0213 15:33:37.556556 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21cb29ba-ee78-45df-a9ab-80ef16c632c3-hubble-tls\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557362 kubelet[2671]: I0213 15:33:37.556598 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-cgroup\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557362 kubelet[2671]: I0213 15:33:37.556634 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hktn\" (UniqueName: \"kubernetes.io/projected/21cb29ba-ee78-45df-a9ab-80ef16c632c3-kube-api-access-8hktn\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557362 kubelet[2671]: I0213 15:33:37.556664 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-bpf-maps\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557362 kubelet[2671]: I0213 15:33:37.556696 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-config-path\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.557362 kubelet[2671]: I0213 15:33:37.556716 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-host-proc-sys-kernel\") pod \"cilium-pltp4\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") " pod="kube-system/cilium-pltp4" Feb 13 15:33:37.853338 kubelet[2671]: E0213 15:33:37.853188 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:37.854135 containerd[1485]: time="2025-02-13T15:33:37.854091443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pltp4,Uid:21cb29ba-ee78-45df-a9ab-80ef16c632c3,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:37.864506 kubelet[2671]: E0213 15:33:37.864440 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:37.865018 containerd[1485]: time="2025-02-13T15:33:37.864972973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhbgv,Uid:6124af7a-ed58-43bf-9447-76ee4b6ab703,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:37.870373 kubelet[2671]: I0213 15:33:37.870327 2671 topology_manager.go:215] "Topology Admit Handler" podUID="e42c96cf-3857-4e91-a31c-4b24632ca1ea" podNamespace="kube-system" podName="cilium-operator-5cc964979-chx6w" Feb 13 15:33:37.882002 systemd[1]: Created slice kubepods-besteffort-pode42c96cf_3857_4e91_a31c_4b24632ca1ea.slice - libcontainer container kubepods-besteffort-pode42c96cf_3857_4e91_a31c_4b24632ca1ea.slice. Feb 13 15:33:37.956462 containerd[1485]: time="2025-02-13T15:33:37.956231137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:37.956462 containerd[1485]: time="2025-02-13T15:33:37.956289738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:37.956462 containerd[1485]: time="2025-02-13T15:33:37.956300628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:37.956462 containerd[1485]: time="2025-02-13T15:33:37.956381011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:37.958681 containerd[1485]: time="2025-02-13T15:33:37.958088067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:37.958681 containerd[1485]: time="2025-02-13T15:33:37.958136920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:37.958681 containerd[1485]: time="2025-02-13T15:33:37.958150716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:37.958681 containerd[1485]: time="2025-02-13T15:33:37.958276393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:37.959844 kubelet[2671]: I0213 15:33:37.959813 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e42c96cf-3857-4e91-a31c-4b24632ca1ea-cilium-config-path\") pod \"cilium-operator-5cc964979-chx6w\" (UID: \"e42c96cf-3857-4e91-a31c-4b24632ca1ea\") " pod="kube-system/cilium-operator-5cc964979-chx6w" Feb 13 15:33:37.959981 kubelet[2671]: I0213 15:33:37.959868 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7l9c\" (UniqueName: \"kubernetes.io/projected/e42c96cf-3857-4e91-a31c-4b24632ca1ea-kube-api-access-w7l9c\") pod \"cilium-operator-5cc964979-chx6w\" (UID: \"e42c96cf-3857-4e91-a31c-4b24632ca1ea\") " pod="kube-system/cilium-operator-5cc964979-chx6w" Feb 13 15:33:37.982211 systemd[1]: Started cri-containerd-4c2ab6c11d1acec26ca68924171ae0b6d0171a5f05a4d61db7a31269700759ab.scope - libcontainer container 4c2ab6c11d1acec26ca68924171ae0b6d0171a5f05a4d61db7a31269700759ab. Feb 13 15:33:37.985718 systemd[1]: Started cri-containerd-25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871.scope - libcontainer container 25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871. Feb 13 15:33:38.006569 containerd[1485]: time="2025-02-13T15:33:38.006526059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhbgv,Uid:6124af7a-ed58-43bf-9447-76ee4b6ab703,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c2ab6c11d1acec26ca68924171ae0b6d0171a5f05a4d61db7a31269700759ab\"" Feb 13 15:33:38.007601 kubelet[2671]: E0213 15:33:38.007492 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:38.010050 containerd[1485]: time="2025-02-13T15:33:38.010012356Z" level=info msg="CreateContainer within sandbox \"4c2ab6c11d1acec26ca68924171ae0b6d0171a5f05a4d61db7a31269700759ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:33:38.014630 containerd[1485]: time="2025-02-13T15:33:38.014570569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pltp4,Uid:21cb29ba-ee78-45df-a9ab-80ef16c632c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\"" Feb 13 15:33:38.015629 kubelet[2671]: E0213 15:33:38.015263 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:38.016329 containerd[1485]: time="2025-02-13T15:33:38.016299666Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:33:38.029651 containerd[1485]: time="2025-02-13T15:33:38.029589627Z" level=info msg="CreateContainer within sandbox \"4c2ab6c11d1acec26ca68924171ae0b6d0171a5f05a4d61db7a31269700759ab\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e0c9ba096c55257fc6157686738aa7bcbf2616e2ad546a2c980eccfeae614976\"" Feb 13 15:33:38.030322 containerd[1485]: time="2025-02-13T15:33:38.030151559Z" level=info msg="StartContainer for \"e0c9ba096c55257fc6157686738aa7bcbf2616e2ad546a2c980eccfeae614976\"" Feb 13 15:33:38.056250 systemd[1]: Started cri-containerd-e0c9ba096c55257fc6157686738aa7bcbf2616e2ad546a2c980eccfeae614976.scope - libcontainer container e0c9ba096c55257fc6157686738aa7bcbf2616e2ad546a2c980eccfeae614976. Feb 13 15:33:38.089633 containerd[1485]: time="2025-02-13T15:33:38.089480639Z" level=info msg="StartContainer for \"e0c9ba096c55257fc6157686738aa7bcbf2616e2ad546a2c980eccfeae614976\" returns successfully" Feb 13 15:33:38.184982 kubelet[2671]: E0213 15:33:38.184844 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:38.185593 containerd[1485]: time="2025-02-13T15:33:38.185393121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-chx6w,Uid:e42c96cf-3857-4e91-a31c-4b24632ca1ea,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:38.211115 containerd[1485]: time="2025-02-13T15:33:38.210720776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:38.211115 containerd[1485]: time="2025-02-13T15:33:38.210852515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:38.211115 containerd[1485]: time="2025-02-13T15:33:38.210879116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:38.211115 containerd[1485]: time="2025-02-13T15:33:38.211025472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:38.231215 systemd[1]: Started cri-containerd-b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94.scope - libcontainer container b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94. 
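
The interleaved containerd and kubelet entries above trace the standard CRI bring-up: RunPodSandbox returns a sandbox ID, CreateContainer is issued against that sandbox and returns a container ID, and StartContainer launches it (kube-proxy's e0c9ba09... here). A compressed sketch of that call ordering; the Runtime interface below is a hypothetical, deliberately minimal stand-in for the real CRI runtime service, whose requests carry many more fields:

    package main

    import "fmt"

    // Runtime is a minimal stand-in for the CRI runtime service.
    type Runtime interface {
        RunPodSandbox(pod string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startPod performs the sequence visible in the log: sandbox first, then
    // container creation inside it, then start.
    func startPod(r Runtime, pod, name string) error {
        sb, err := r.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox: %w", err)
        }
        id, err := r.CreateContainer(sb, name)
        if err != nil {
            return fmt.Errorf("CreateContainer: %w", err)
        }
        return r.StartContainer(id)
    }

    // fakeRuntime echoes the steps so the sketch runs standalone.
    type fakeRuntime struct{}

    func (fakeRuntime) RunPodSandbox(pod string) (string, error) {
        return "sandbox-for-" + pod, nil
    }
    func (fakeRuntime) CreateContainer(sb, name string) (string, error) {
        return name + "-in-" + sb, nil
    }
    func (fakeRuntime) StartContainer(id string) error {
        fmt.Println("started", id)
        return nil
    }

    func main() {
        if err := startPod(fakeRuntime{}, "kube-proxy-qhbgv", "kube-proxy"); err != nil {
            fmt.Println(err)
        }
    }
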
Feb 13 15:33:38.269277 containerd[1485]: time="2025-02-13T15:33:38.269240467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-chx6w,Uid:e42c96cf-3857-4e91-a31c-4b24632ca1ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94\"" Feb 13 15:33:38.270222 kubelet[2671]: E0213 15:33:38.270168 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:38.316899 kubelet[2671]: E0213 15:33:38.316869 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:38.329209 kubelet[2671]: I0213 15:33:38.328641 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qhbgv" podStartSLOduration=1.328600866 podStartE2EDuration="1.328600866s" podCreationTimestamp="2025-02-13 15:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:38.328435924 +0000 UTC m=+14.129548019" watchObservedRunningTime="2025-02-13 15:33:38.328600866 +0000 UTC m=+14.129712961" Feb 13 15:33:46.159494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779788570.mount: Deactivated successfully. Feb 13 15:33:50.614436 containerd[1485]: time="2025-02-13T15:33:50.614378066Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:50.617993 containerd[1485]: time="2025-02-13T15:33:50.617949938Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:33:50.619248 containerd[1485]: time="2025-02-13T15:33:50.619185844Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:50.620593 containerd[1485]: time="2025-02-13T15:33:50.620559138Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.604227121s" Feb 13 15:33:50.620593 containerd[1485]: time="2025-02-13T15:33:50.620590427Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:33:50.623566 containerd[1485]: time="2025-02-13T15:33:50.623413108Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:33:50.624749 containerd[1485]: time="2025-02-13T15:33:50.624721701Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:33:50.645499 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322895605.mount: Deactivated successfully. Feb 13 15:33:50.648208 containerd[1485]: time="2025-02-13T15:33:50.648177282Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\"" Feb 13 15:33:50.648579 containerd[1485]: time="2025-02-13T15:33:50.648558549Z" level=info msg="StartContainer for \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\"" Feb 13 15:33:50.683276 systemd[1]: Started cri-containerd-61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c.scope - libcontainer container 61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c. Feb 13 15:33:50.711889 containerd[1485]: time="2025-02-13T15:33:50.711840085Z" level=info msg="StartContainer for \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\" returns successfully" Feb 13 15:33:50.721143 systemd[1]: cri-containerd-61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c.scope: Deactivated successfully. Feb 13 15:33:51.247862 containerd[1485]: time="2025-02-13T15:33:51.247795996Z" level=info msg="shim disconnected" id=61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c namespace=k8s.io Feb 13 15:33:51.247862 containerd[1485]: time="2025-02-13T15:33:51.247852993Z" level=warning msg="cleaning up after shim disconnected" id=61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c namespace=k8s.io Feb 13 15:33:51.247862 containerd[1485]: time="2025-02-13T15:33:51.247862491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:33:51.259932 containerd[1485]: time="2025-02-13T15:33:51.259880532Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:33:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:33:51.345982 kubelet[2671]: E0213 15:33:51.345937 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:51.348269 containerd[1485]: time="2025-02-13T15:33:51.348012388Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:33:51.644442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c-rootfs.mount: Deactivated successfully. Feb 13 15:33:52.085202 containerd[1485]: time="2025-02-13T15:33:52.085141950Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\"" Feb 13 15:33:52.085721 containerd[1485]: time="2025-02-13T15:33:52.085689149Z" level=info msg="StartContainer for \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\"" Feb 13 15:33:52.116208 systemd[1]: Started cri-containerd-2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712.scope - libcontainer container 2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712. 
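
Back at 15:33:50, containerd reported that the cilium image pull it began at 15:33:38 completed "in 12.604227121s", alongside bytes read and the resolved digest. The figure is reproducible from the two logged timestamps, to within the microseconds containerd clocks internally:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the PullImage and Pulled entries above.
        start, err := time.Parse(time.RFC3339Nano, "2025-02-13T15:33:38.016299666Z")
        if err != nil {
            panic(err)
        }
        end, _ := time.Parse(time.RFC3339Nano, "2025-02-13T15:33:50.620559138Z")
        fmt.Println(end.Sub(start)) // 12.604259472s, a hair over the logged value
    }
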
Feb 13 15:33:52.152908 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:33:52.153147 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:33:52.153227 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:33:52.158378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:33:52.158617 systemd[1]: cri-containerd-2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712.scope: Deactivated successfully. Feb 13 15:33:52.196625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:33:52.263867 containerd[1485]: time="2025-02-13T15:33:52.263796621Z" level=info msg="StartContainer for \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\" returns successfully" Feb 13 15:33:52.629991 containerd[1485]: time="2025-02-13T15:33:52.629854020Z" level=info msg="shim disconnected" id=2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712 namespace=k8s.io Feb 13 15:33:52.629991 containerd[1485]: time="2025-02-13T15:33:52.629942727Z" level=warning msg="cleaning up after shim disconnected" id=2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712 namespace=k8s.io Feb 13 15:33:52.629991 containerd[1485]: time="2025-02-13T15:33:52.629976680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:33:52.643818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712-rootfs.mount: Deactivated successfully. Feb 13 15:33:53.347974 kubelet[2671]: E0213 15:33:53.347944 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:53.349446 containerd[1485]: time="2025-02-13T15:33:53.349416744Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:33:53.715356 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:44594.service - OpenSSH per-connection server daemon (10.0.0.1:44594). Feb 13 15:33:53.763225 sshd[3208]: Accepted publickey for core from 10.0.0.1 port 44594 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:53.765232 sshd-session[3208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:53.817438 systemd-logind[1469]: New session 10 of user core. Feb 13 15:33:53.827203 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:33:53.834881 containerd[1485]: time="2025-02-13T15:33:53.834821159Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\"" Feb 13 15:33:53.835500 containerd[1485]: time="2025-02-13T15:33:53.835442429Z" level=info msg="StartContainer for \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\"" Feb 13 15:33:53.870230 systemd[1]: Started cri-containerd-1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394.scope - libcontainer container 1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394. 
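
The systemd-sysctl stop/start cycle at 15:33:52 above coincides with the apply-sysctl-overwrites init container running: that container's job is to rewrite kernel parameters before the agent starts, and writing a sysctl is just a write under /proc/sys. A sketch of such a write; the rp_filter key and value are an assumption for illustration (it is one overwrite a CNI like Cilium commonly applies), and the real container's overwrite set is broader:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setSysctl writes a value under /proc/sys, e.g. net.ipv4.conf.all.rp_filter
    // becomes /proc/sys/net/ipv4/conf/all/rp_filter. Requires root.
    func setSysctl(key, value string) error {
        path := "/proc/sys/" + strings.ReplaceAll(key, ".", "/")
        return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
        // Assumed example: relaxing reverse-path filtering.
        if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
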
Feb 13 15:33:53.907945 containerd[1485]: time="2025-02-13T15:33:53.907892666Z" level=info msg="StartContainer for \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\" returns successfully" Feb 13 15:33:53.909299 systemd[1]: cri-containerd-1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394.scope: Deactivated successfully. Feb 13 15:33:53.932547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394-rootfs.mount: Deactivated successfully. Feb 13 15:33:53.940948 containerd[1485]: time="2025-02-13T15:33:53.940878994Z" level=info msg="shim disconnected" id=1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394 namespace=k8s.io Feb 13 15:33:53.940948 containerd[1485]: time="2025-02-13T15:33:53.940931924Z" level=warning msg="cleaning up after shim disconnected" id=1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394 namespace=k8s.io Feb 13 15:33:53.940948 containerd[1485]: time="2025-02-13T15:33:53.940941362Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:33:53.992673 sshd[3210]: Connection closed by 10.0.0.1 port 44594 Feb 13 15:33:53.992921 sshd-session[3208]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:53.996829 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:44594.service: Deactivated successfully. Feb 13 15:33:53.998879 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:33:53.999511 systemd-logind[1469]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:33:54.000485 systemd-logind[1469]: Removed session 10. Feb 13 15:33:54.354669 kubelet[2671]: E0213 15:33:54.354540 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:54.356686 containerd[1485]: time="2025-02-13T15:33:54.356640222Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:33:54.740997 containerd[1485]: time="2025-02-13T15:33:54.740939119Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\"" Feb 13 15:33:54.741410 containerd[1485]: time="2025-02-13T15:33:54.741378996Z" level=info msg="StartContainer for \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\"" Feb 13 15:33:54.766234 systemd[1]: Started cri-containerd-faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc.scope - libcontainer container faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc. Feb 13 15:33:54.813919 systemd[1]: cri-containerd-faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc.scope: Deactivated successfully. Feb 13 15:33:54.998532 containerd[1485]: time="2025-02-13T15:33:54.998128302Z" level=info msg="StartContainer for \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\" returns successfully" Feb 13 15:33:55.014721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc-rootfs.mount: Deactivated successfully. 
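
Each short-lived init container in this stretch leaves a matched pair in the journal: "Started cri-containerd-<id>.scope" when systemd creates the transient scope, and "<id>.scope: Deactivated successfully" when the container exits, followed by the rootfs mount cleanup. Pairing those lines is a quick way to spot-check init-container runtimes; a sketch over the mount-bpf-fs entries copied from this log:

    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        started = regexp.MustCompile(`Started cri-containerd-([0-9a-f]{64})\.scope`)
        stopped = regexp.MustCompile(`cri-containerd-([0-9a-f]{64})\.scope: Deactivated successfully`)
    )

    func main() {
        lines := []string{
            "Feb 13 15:33:53.870230 systemd[1]: Started cri-containerd-1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394.scope - libcontainer container ...",
            "Feb 13 15:33:53.909299 systemd[1]: cri-containerd-1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394.scope: Deactivated successfully.",
        }
        open := map[string]string{} // container ID -> start timestamp prefix
        for _, l := range lines {
            if m := started.FindStringSubmatch(l); m != nil {
                open[m[1]] = l[:22]
            } else if m := stopped.FindStringSubmatch(l); m != nil {
                // mount-bpf-fs lived about 39ms on this boot
                fmt.Printf("%s ran %s -> %s\n", m[1][:12], open[m[1]], l[:22])
            }
        }
    }
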
Feb 13 15:33:55.137469 containerd[1485]: time="2025-02-13T15:33:55.137390144Z" level=info msg="shim disconnected" id=faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc namespace=k8s.io Feb 13 15:33:55.137469 containerd[1485]: time="2025-02-13T15:33:55.137457851Z" level=warning msg="cleaning up after shim disconnected" id=faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc namespace=k8s.io Feb 13 15:33:55.137469 containerd[1485]: time="2025-02-13T15:33:55.137467860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:33:55.356105 kubelet[2671]: E0213 15:33:55.355851 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:55.358998 containerd[1485]: time="2025-02-13T15:33:55.358827610Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:33:55.381129 containerd[1485]: time="2025-02-13T15:33:55.381058165Z" level=info msg="CreateContainer within sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\"" Feb 13 15:33:55.381733 containerd[1485]: time="2025-02-13T15:33:55.381640160Z" level=info msg="StartContainer for \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\"" Feb 13 15:33:55.415282 systemd[1]: Started cri-containerd-f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5.scope - libcontainer container f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5. Feb 13 15:33:55.447287 containerd[1485]: time="2025-02-13T15:33:55.447232998Z" level=info msg="StartContainer for \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\" returns successfully" Feb 13 15:33:55.617046 kubelet[2671]: I0213 15:33:55.616685 2671 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:33:55.641438 kubelet[2671]: I0213 15:33:55.641383 2671 topology_manager.go:215] "Topology Admit Handler" podUID="61539883-bbad-49fa-ab74-a4ceae0da504" podNamespace="kube-system" podName="coredns-76f75df574-64spp" Feb 13 15:33:55.644272 kubelet[2671]: I0213 15:33:55.644241 2671 topology_manager.go:215] "Topology Admit Handler" podUID="f745ebda-6a5e-44ef-8400-f44b3e467da8" podNamespace="kube-system" podName="coredns-76f75df574-wwnv9" Feb 13 15:33:55.661398 systemd[1]: Created slice kubepods-burstable-pod61539883_bbad_49fa_ab74_a4ceae0da504.slice - libcontainer container kubepods-burstable-pod61539883_bbad_49fa_ab74_a4ceae0da504.slice. Feb 13 15:33:55.667932 systemd[1]: Created slice kubepods-burstable-podf745ebda_6a5e_44ef_8400_f44b3e467da8.slice - libcontainer container kubepods-burstable-podf745ebda_6a5e_44ef_8400_f44b3e467da8.slice. 
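
The two "Created slice" entries above show kubelet's systemd cgroup layout again: each pod lives in kubepods-<qos>-pod<uid>.slice, with the UID's dashes swapped for underscores because "-" is systemd's slice path separator. The coredns pods land in the burstable tier like the cilium pod, while kube-proxy earlier ran as besteffort. The derivation, assuming only the pattern visible in these entries:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reconstructs the transient slice name kubelet's systemd cgroup
    // driver creates for a pod, as seen in the "Created slice" entries.
    func podSlice(qosClass, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "61539883-bbad-49fa-ab74-a4ceae0da504"))
        // kubepods-burstable-pod61539883_bbad_49fa_ab74_a4ceae0da504.slice
        fmt.Println(podSlice("besteffort", "6124af7a-ed58-43bf-9447-76ee4b6ab703"))
        // kubepods-besteffort-pod6124af7a_ed58_43bf_9447_76ee4b6ab703.slice
    }
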
Feb 13 15:33:55.685169 kubelet[2671]: I0213 15:33:55.685122 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl2sp\" (UniqueName: \"kubernetes.io/projected/61539883-bbad-49fa-ab74-a4ceae0da504-kube-api-access-wl2sp\") pod \"coredns-76f75df574-64spp\" (UID: \"61539883-bbad-49fa-ab74-a4ceae0da504\") " pod="kube-system/coredns-76f75df574-64spp" Feb 13 15:33:55.685319 kubelet[2671]: I0213 15:33:55.685232 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrf2\" (UniqueName: \"kubernetes.io/projected/f745ebda-6a5e-44ef-8400-f44b3e467da8-kube-api-access-mhrf2\") pod \"coredns-76f75df574-wwnv9\" (UID: \"f745ebda-6a5e-44ef-8400-f44b3e467da8\") " pod="kube-system/coredns-76f75df574-wwnv9" Feb 13 15:33:55.685661 kubelet[2671]: I0213 15:33:55.685628 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61539883-bbad-49fa-ab74-a4ceae0da504-config-volume\") pod \"coredns-76f75df574-64spp\" (UID: \"61539883-bbad-49fa-ab74-a4ceae0da504\") " pod="kube-system/coredns-76f75df574-64spp" Feb 13 15:33:55.686300 kubelet[2671]: I0213 15:33:55.686271 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f745ebda-6a5e-44ef-8400-f44b3e467da8-config-volume\") pod \"coredns-76f75df574-wwnv9\" (UID: \"f745ebda-6a5e-44ef-8400-f44b3e467da8\") " pod="kube-system/coredns-76f75df574-wwnv9" Feb 13 15:33:55.707262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093646981.mount: Deactivated successfully. Feb 13 15:33:55.909933 containerd[1485]: time="2025-02-13T15:33:55.909804512Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:55.910818 containerd[1485]: time="2025-02-13T15:33:55.910759648Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:33:55.912128 containerd[1485]: time="2025-02-13T15:33:55.912094808Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:55.913451 containerd[1485]: time="2025-02-13T15:33:55.913416923Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.289919346s" Feb 13 15:33:55.913451 containerd[1485]: time="2025-02-13T15:33:55.913450247Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:33:55.915600 containerd[1485]: time="2025-02-13T15:33:55.915499609Z" level=info msg="CreateContainer within sandbox \"b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94\" for 
container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:33:55.932423 containerd[1485]: time="2025-02-13T15:33:55.932369726Z" level=info msg="CreateContainer within sandbox \"b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\"" Feb 13 15:33:55.932897 containerd[1485]: time="2025-02-13T15:33:55.932876769Z" level=info msg="StartContainer for \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\"" Feb 13 15:33:55.959236 systemd[1]: Started cri-containerd-bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7.scope - libcontainer container bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7. Feb 13 15:33:55.965520 kubelet[2671]: E0213 15:33:55.965401 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:55.967304 containerd[1485]: time="2025-02-13T15:33:55.966955710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-64spp,Uid:61539883-bbad-49fa-ab74-a4ceae0da504,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:55.972040 kubelet[2671]: E0213 15:33:55.972008 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:55.972693 containerd[1485]: time="2025-02-13T15:33:55.972647852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wwnv9,Uid:f745ebda-6a5e-44ef-8400-f44b3e467da8,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:56.087435 containerd[1485]: time="2025-02-13T15:33:56.087367366Z" level=info msg="StartContainer for \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\" returns successfully" Feb 13 15:33:56.364458 kubelet[2671]: E0213 15:33:56.364424 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:56.365823 kubelet[2671]: E0213 15:33:56.365796 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:56.579256 kubelet[2671]: I0213 15:33:56.579015 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pltp4" podStartSLOduration=6.971642357 podStartE2EDuration="19.578970597s" podCreationTimestamp="2025-02-13 15:33:37 +0000 UTC" firstStartedPulling="2025-02-13 15:33:38.0158043 +0000 UTC m=+13.816916395" lastFinishedPulling="2025-02-13 15:33:50.62313254 +0000 UTC m=+26.424244635" observedRunningTime="2025-02-13 15:33:56.57883944 +0000 UTC m=+32.379951566" watchObservedRunningTime="2025-02-13 15:33:56.578970597 +0000 UTC m=+32.380082693" Feb 13 15:33:57.368087 kubelet[2671]: E0213 15:33:57.368029 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:57.368087 kubelet[2671]: E0213 15:33:57.368063 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:58.370241 kubelet[2671]: 
E0213 15:33:58.370202 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:59.003767 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:56506.service - OpenSSH per-connection server daemon (10.0.0.1:56506). Feb 13 15:33:59.051367 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 56506 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:59.052940 sshd-session[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:59.057011 systemd-logind[1469]: New session 11 of user core. Feb 13 15:33:59.065193 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:33:59.184501 sshd[3531]: Connection closed by 10.0.0.1 port 56506 Feb 13 15:33:59.184882 sshd-session[3529]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:59.188944 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:56506.service: Deactivated successfully. Feb 13 15:33:59.190996 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:33:59.191652 systemd-logind[1469]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:33:59.192608 systemd-logind[1469]: Removed session 11. Feb 13 15:33:59.715515 systemd-networkd[1400]: cilium_host: Link UP Feb 13 15:33:59.715988 systemd-networkd[1400]: cilium_net: Link UP Feb 13 15:33:59.716418 systemd-networkd[1400]: cilium_net: Gained carrier Feb 13 15:33:59.716801 systemd-networkd[1400]: cilium_host: Gained carrier Feb 13 15:33:59.817906 systemd-networkd[1400]: cilium_vxlan: Link UP Feb 13 15:33:59.817916 systemd-networkd[1400]: cilium_vxlan: Gained carrier Feb 13 15:34:00.001237 systemd-networkd[1400]: cilium_net: Gained IPv6LL Feb 13 15:34:00.019100 kernel: NET: Registered PF_ALG protocol family Feb 13 15:34:00.103216 systemd-networkd[1400]: cilium_host: Gained IPv6LL Feb 13 15:34:00.677854 systemd-networkd[1400]: lxc_health: Link UP Feb 13 15:34:00.684541 systemd-networkd[1400]: lxc_health: Gained carrier Feb 13 15:34:01.127755 systemd-networkd[1400]: lxc28ab46710b13: Link UP Feb 13 15:34:01.134093 kernel: eth0: renamed from tmp66e4e Feb 13 15:34:01.138719 systemd-networkd[1400]: lxc28ab46710b13: Gained carrier Feb 13 15:34:01.156867 systemd-networkd[1400]: lxc84e79b04c570: Link UP Feb 13 15:34:01.166154 kernel: eth0: renamed from tmpfa54d Feb 13 15:34:01.171929 systemd-networkd[1400]: lxc84e79b04c570: Gained carrier Feb 13 15:34:01.753194 systemd-networkd[1400]: cilium_vxlan: Gained IPv6LL Feb 13 15:34:01.855581 kubelet[2671]: E0213 15:34:01.855539 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:01.943262 systemd-networkd[1400]: lxc_health: Gained IPv6LL Feb 13 15:34:01.982257 kubelet[2671]: I0213 15:34:01.982178 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-chx6w" podStartSLOduration=7.339318071 podStartE2EDuration="24.982134546s" podCreationTimestamp="2025-02-13 15:33:37 +0000 UTC" firstStartedPulling="2025-02-13 15:33:38.270875837 +0000 UTC m=+14.071987942" lastFinishedPulling="2025-02-13 15:33:55.913692322 +0000 UTC m=+31.714804417" observedRunningTime="2025-02-13 15:33:56.593710795 +0000 UTC m=+32.394822901" watchObservedRunningTime="2025-02-13 15:34:01.982134546 +0000 UTC m=+37.783246641" Feb 13 15:34:02.839263 
systemd-networkd[1400]: lxc84e79b04c570: Gained IPv6LL Feb 13 15:34:03.223219 systemd-networkd[1400]: lxc28ab46710b13: Gained IPv6LL Feb 13 15:34:04.155706 kubelet[2671]: I0213 15:34:04.155637 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:34:04.156941 kubelet[2671]: E0213 15:34:04.156869 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:04.210454 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:56520.service - OpenSSH per-connection server daemon (10.0.0.1:56520). Feb 13 15:34:04.264936 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 56520 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:04.266956 sshd-session[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:04.271268 systemd-logind[1469]: New session 12 of user core. Feb 13 15:34:04.280302 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:34:04.380958 kubelet[2671]: E0213 15:34:04.380763 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:04.437131 sshd[3929]: Connection closed by 10.0.0.1 port 56520 Feb 13 15:34:04.440147 sshd-session[3927]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:04.453008 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:56520.service: Deactivated successfully. Feb 13 15:34:04.456722 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:34:04.460134 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:34:04.461259 systemd-logind[1469]: Removed session 12. Feb 13 15:34:04.620248 containerd[1485]: time="2025-02-13T15:34:04.620143348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:04.620248 containerd[1485]: time="2025-02-13T15:34:04.620201578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:04.620703 containerd[1485]: time="2025-02-13T15:34:04.620227046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:04.620703 containerd[1485]: time="2025-02-13T15:34:04.620314580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:04.622627 containerd[1485]: time="2025-02-13T15:34:04.622451103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:04.622627 containerd[1485]: time="2025-02-13T15:34:04.622504793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:04.622627 containerd[1485]: time="2025-02-13T15:34:04.622515153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:04.622627 containerd[1485]: time="2025-02-13T15:34:04.622601665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:04.647229 systemd[1]: Started cri-containerd-66e4e44b7eba712aa84e80837fe992d19d4cd2f6f0c7ea92546936428f248543.scope - libcontainer container 66e4e44b7eba712aa84e80837fe992d19d4cd2f6f0c7ea92546936428f248543. Feb 13 15:34:04.648708 systemd[1]: Started cri-containerd-fa54dd25cba1ecb1db54a56e0ad35a764fcc13623c6c69034b9f6eb32f838475.scope - libcontainer container fa54dd25cba1ecb1db54a56e0ad35a764fcc13623c6c69034b9f6eb32f838475. Feb 13 15:34:04.660668 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:34:04.662365 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:34:04.686405 containerd[1485]: time="2025-02-13T15:34:04.686364331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-64spp,Uid:61539883-bbad-49fa-ab74-a4ceae0da504,Namespace:kube-system,Attempt:0,} returns sandbox id \"66e4e44b7eba712aa84e80837fe992d19d4cd2f6f0c7ea92546936428f248543\"" Feb 13 15:34:04.688567 kubelet[2671]: E0213 15:34:04.687442 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:04.688707 containerd[1485]: time="2025-02-13T15:34:04.688335813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wwnv9,Uid:f745ebda-6a5e-44ef-8400-f44b3e467da8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa54dd25cba1ecb1db54a56e0ad35a764fcc13623c6c69034b9f6eb32f838475\"" Feb 13 15:34:04.689663 kubelet[2671]: E0213 15:34:04.689431 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:04.690826 containerd[1485]: time="2025-02-13T15:34:04.690805111Z" level=info msg="CreateContainer within sandbox \"66e4e44b7eba712aa84e80837fe992d19d4cd2f6f0c7ea92546936428f248543\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:34:04.695482 containerd[1485]: time="2025-02-13T15:34:04.695398577Z" level=info msg="CreateContainer within sandbox \"fa54dd25cba1ecb1db54a56e0ad35a764fcc13623c6c69034b9f6eb32f838475\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:34:04.719646 containerd[1485]: time="2025-02-13T15:34:04.719588425Z" level=info msg="CreateContainer within sandbox \"66e4e44b7eba712aa84e80837fe992d19d4cd2f6f0c7ea92546936428f248543\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6146a244db623eb518f262b7cfd39f7f346ec91162a767443288b9ba2d48694f\"" Feb 13 15:34:04.720146 containerd[1485]: time="2025-02-13T15:34:04.720092923Z" level=info msg="StartContainer for \"6146a244db623eb518f262b7cfd39f7f346ec91162a767443288b9ba2d48694f\"" Feb 13 15:34:04.724845 containerd[1485]: time="2025-02-13T15:34:04.724803319Z" level=info msg="CreateContainer within sandbox \"fa54dd25cba1ecb1db54a56e0ad35a764fcc13623c6c69034b9f6eb32f838475\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c761f4bb8ba299f6dbeda23e94e837c6c92cd9355dd47dc1e1e148fc240c1a2\"" Feb 13 15:34:04.725320 containerd[1485]: time="2025-02-13T15:34:04.725278471Z" level=info msg="StartContainer for \"5c761f4bb8ba299f6dbeda23e94e837c6c92cd9355dd47dc1e1e148fc240c1a2\"" Feb 13 15:34:04.746302 systemd[1]: Started 
cri-containerd-6146a244db623eb518f262b7cfd39f7f346ec91162a767443288b9ba2d48694f.scope - libcontainer container 6146a244db623eb518f262b7cfd39f7f346ec91162a767443288b9ba2d48694f. Feb 13 15:34:04.748846 systemd[1]: Started cri-containerd-5c761f4bb8ba299f6dbeda23e94e837c6c92cd9355dd47dc1e1e148fc240c1a2.scope - libcontainer container 5c761f4bb8ba299f6dbeda23e94e837c6c92cd9355dd47dc1e1e148fc240c1a2. Feb 13 15:34:04.784432 containerd[1485]: time="2025-02-13T15:34:04.784298410Z" level=info msg="StartContainer for \"5c761f4bb8ba299f6dbeda23e94e837c6c92cd9355dd47dc1e1e148fc240c1a2\" returns successfully" Feb 13 15:34:04.784432 containerd[1485]: time="2025-02-13T15:34:04.784336642Z" level=info msg="StartContainer for \"6146a244db623eb518f262b7cfd39f7f346ec91162a767443288b9ba2d48694f\" returns successfully" Feb 13 15:34:05.383374 kubelet[2671]: E0213 15:34:05.383335 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:05.385663 kubelet[2671]: E0213 15:34:05.385629 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:05.430476 kubelet[2671]: I0213 15:34:05.430345 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wwnv9" podStartSLOduration=28.430304763 podStartE2EDuration="28.430304763s" podCreationTimestamp="2025-02-13 15:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:05.429780238 +0000 UTC m=+41.230892333" watchObservedRunningTime="2025-02-13 15:34:05.430304763 +0000 UTC m=+41.231416858" Feb 13 15:34:05.454011 kubelet[2671]: I0213 15:34:05.453863 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-64spp" podStartSLOduration=28.453821673 podStartE2EDuration="28.453821673s" podCreationTimestamp="2025-02-13 15:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:05.443422997 +0000 UTC m=+41.244535092" watchObservedRunningTime="2025-02-13 15:34:05.453821673 +0000 UTC m=+41.254933768" Feb 13 15:34:06.387300 kubelet[2671]: E0213 15:34:06.387271 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:06.387746 kubelet[2671]: E0213 15:34:06.387372 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:07.389011 kubelet[2671]: E0213 15:34:07.388967 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:07.389649 kubelet[2671]: E0213 15:34:07.389620 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:09.451427 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:39136.service - OpenSSH per-connection server daemon (10.0.0.1:39136). 
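
The pod_startup_latency_tracker entries here and at 15:33:56 (for cilium-pltp4) expose the arithmetic behind the two durations: podStartE2EDuration is the observed running time (evidently the watch-observed one) minus the pod creation timestamp, and podStartSLOduration subtracts the image-pull window from that. That is why the coredns pods, which pulled nothing and carry zero-valued pull timestamps, report identical numbers, while cilium-pltp4 does not. Checking the cilium figures against the logged values:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matches the tracker's "2025-02-13 15:33:37 +0000 UTC" form;
        // time.Parse accepts fractional seconds even though the layout omits them.
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-02-13 15:33:37 +0000 UTC")
        running := mustParse("2025-02-13 15:33:56.578970597 +0000 UTC")
        pullStart := mustParse("2025-02-13 15:33:38.0158043 +0000 UTC")
        pullEnd := mustParse("2025-02-13 15:33:50.62313254 +0000 UTC")

        e2e := running.Sub(created)         // 19.578970597s = podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // 6.971642357s  = podStartSLOduration
        fmt.Println(e2e, slo)
    }
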
Feb 13 15:34:09.505835 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 39136 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:09.507477 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:09.512041 systemd-logind[1469]: New session 13 of user core. Feb 13 15:34:09.526318 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:34:09.645123 sshd[4123]: Connection closed by 10.0.0.1 port 39136 Feb 13 15:34:09.645467 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:09.648954 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:39136.service: Deactivated successfully. Feb 13 15:34:09.651234 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:34:09.651856 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:34:09.653207 systemd-logind[1469]: Removed session 13. Feb 13 15:34:14.659007 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:39140.service - OpenSSH per-connection server daemon (10.0.0.1:39140). Feb 13 15:34:14.702565 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 39140 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:14.704053 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:14.707775 systemd-logind[1469]: New session 14 of user core. Feb 13 15:34:14.715198 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:34:14.824271 sshd[4138]: Connection closed by 10.0.0.1 port 39140 Feb 13 15:34:14.824612 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:14.835187 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:39140.service: Deactivated successfully. Feb 13 15:34:14.837250 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:34:14.838700 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:34:14.844372 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:39148.service - OpenSSH per-connection server daemon (10.0.0.1:39148). Feb 13 15:34:14.845563 systemd-logind[1469]: Removed session 14. Feb 13 15:34:14.886472 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 39148 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:14.888159 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:14.892985 systemd-logind[1469]: New session 15 of user core. Feb 13 15:34:14.907260 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:34:15.078174 sshd[4154]: Connection closed by 10.0.0.1 port 39148 Feb 13 15:34:15.079041 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:15.091367 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:39148.service: Deactivated successfully. Feb 13 15:34:15.093176 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:34:15.095169 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:34:15.101504 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:39164.service - OpenSSH per-connection server daemon (10.0.0.1:39164). Feb 13 15:34:15.103425 systemd-logind[1469]: Removed session 15. 
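
Sessions 13 through 15 above each live well under a second: the daemon accepts the key, PAM opens the session, one command runs, and the connection closes. The journal timestamps are enough to measure that; for session 13, with values copied from the entries above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The layout omits the year; both stamps share it, so the delta is unaffected.
        const layout = "Jan 2 15:04:05"
        opened, err := time.Parse(layout, "Feb 13 15:34:09.505835")
        if err != nil {
            panic(err)
        }
        closed, _ := time.Parse(layout, "Feb 13 15:34:09.645123")
        fmt.Println(closed.Sub(opened)) // 139.288ms for session 13
    }
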
Feb 13 15:34:15.142289 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 39164 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:15.144120 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:15.148362 systemd-logind[1469]: New session 16 of user core. Feb 13 15:34:15.155277 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:34:15.268010 sshd[4167]: Connection closed by 10.0.0.1 port 39164 Feb 13 15:34:15.268412 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:15.272234 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:39164.service: Deactivated successfully. Feb 13 15:34:15.274551 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:34:15.275382 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:34:15.276283 systemd-logind[1469]: Removed session 16. Feb 13 15:34:20.279485 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:41990.service - OpenSSH per-connection server daemon (10.0.0.1:41990). Feb 13 15:34:20.323201 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 41990 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:20.324908 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:20.329201 systemd-logind[1469]: New session 17 of user core. Feb 13 15:34:20.338238 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:34:20.444336 sshd[4181]: Connection closed by 10.0.0.1 port 41990 Feb 13 15:34:20.444689 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:20.448327 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:41990.service: Deactivated successfully. Feb 13 15:34:20.450339 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:34:20.451016 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:34:20.451962 systemd-logind[1469]: Removed session 17. Feb 13 15:34:25.456647 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:41996.service - OpenSSH per-connection server daemon (10.0.0.1:41996). Feb 13 15:34:25.501890 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 41996 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:25.503517 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:25.507327 systemd-logind[1469]: New session 18 of user core. Feb 13 15:34:25.517198 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:34:25.622834 sshd[4198]: Connection closed by 10.0.0.1 port 41996 Feb 13 15:34:25.623333 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:25.635776 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:41996.service: Deactivated successfully. Feb 13 15:34:25.637832 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:34:25.639415 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:34:25.646351 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012). Feb 13 15:34:25.647201 systemd-logind[1469]: Removed session 18. 
Feb 13 15:34:25.686986 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:25.688641 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:25.692916 systemd-logind[1469]: New session 19 of user core.
Feb 13 15:34:25.698226 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:34:25.868228 sshd[4212]: Connection closed by 10.0.0.1 port 42012
Feb 13 15:34:25.868634 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:25.881059 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:42012.service: Deactivated successfully.
Feb 13 15:34:25.883053 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:34:25.884886 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:34:25.890408 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:42014.service - OpenSSH per-connection server daemon (10.0.0.1:42014).
Feb 13 15:34:25.891623 systemd-logind[1469]: Removed session 19.
Feb 13 15:34:25.940161 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 42014 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:25.941869 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:25.947176 systemd-logind[1469]: New session 20 of user core.
Feb 13 15:34:25.954194 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:34:27.684135 sshd[4224]: Connection closed by 10.0.0.1 port 42014
Feb 13 15:34:27.684780 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:27.696769 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:42014.service: Deactivated successfully.
Feb 13 15:34:27.699865 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:34:27.702748 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:34:27.709455 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:47392.service - OpenSSH per-connection server daemon (10.0.0.1:47392).
Feb 13 15:34:27.710447 systemd-logind[1469]: Removed session 20.
Feb 13 15:34:27.752870 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 47392 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:27.754537 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:27.758732 systemd-logind[1469]: New session 21 of user core.
Feb 13 15:34:27.768262 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:34:28.137192 sshd[4246]: Connection closed by 10.0.0.1 port 47392
Feb 13 15:34:28.137851 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:28.149761 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:47392.service: Deactivated successfully.
Feb 13 15:34:28.151363 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:34:28.152962 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:34:28.162407 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:47394.service - OpenSSH per-connection server daemon (10.0.0.1:47394).
Feb 13 15:34:28.163558 systemd-logind[1469]: Removed session 21.
Feb 13 15:34:28.200624 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 47394 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:28.202276 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:28.206124 systemd-logind[1469]: New session 22 of user core.
Feb 13 15:34:28.215182 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:34:28.323056 sshd[4259]: Connection closed by 10.0.0.1 port 47394
Feb 13 15:34:28.323437 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:28.327418 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:47394.service: Deactivated successfully.
Feb 13 15:34:28.329372 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:34:28.330035 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:34:28.331000 systemd-logind[1469]: Removed session 22.
Feb 13 15:34:33.335251 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:47406.service - OpenSSH per-connection server daemon (10.0.0.1:47406).
Feb 13 15:34:33.378846 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 47406 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:33.380634 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:33.385098 systemd-logind[1469]: New session 23 of user core.
Feb 13 15:34:33.391215 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:34:33.504514 sshd[4273]: Connection closed by 10.0.0.1 port 47406
Feb 13 15:34:33.504905 sshd-session[4271]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:33.509437 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:47406.service: Deactivated successfully.
Feb 13 15:34:33.511520 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:34:33.512177 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:34:33.513209 systemd-logind[1469]: Removed session 23.
Feb 13 15:34:38.520027 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:53228.service - OpenSSH per-connection server daemon (10.0.0.1:53228).
Feb 13 15:34:38.563398 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 53228 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:38.564757 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:38.568681 systemd-logind[1469]: New session 24 of user core.
Feb 13 15:34:38.575195 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:34:38.691389 sshd[4294]: Connection closed by 10.0.0.1 port 53228
Feb 13 15:34:38.691757 sshd-session[4292]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:38.695662 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:53228.service: Deactivated successfully.
Feb 13 15:34:38.697413 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:34:38.697979 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:34:38.698723 systemd-logind[1469]: Removed session 24.
Feb 13 15:34:43.702886 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:53236.service - OpenSSH per-connection server daemon (10.0.0.1:53236).
Feb 13 15:34:43.746314 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 53236 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:43.747745 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:43.751310 systemd-logind[1469]: New session 25 of user core.
Feb 13 15:34:43.761189 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:34:43.869171 sshd[4308]: Connection closed by 10.0.0.1 port 53236
Feb 13 15:34:43.869547 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:43.873205 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:53236.service: Deactivated successfully.
Feb 13 15:34:43.875297 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:34:43.876052 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:34:43.876960 systemd-logind[1469]: Removed session 25.
Feb 13 15:34:46.278332 kubelet[2671]: E0213 15:34:46.278280 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:48.884105 systemd[1]: Started sshd@25-10.0.0.112:22-10.0.0.1:33786.service - OpenSSH per-connection server daemon (10.0.0.1:33786).
Feb 13 15:34:48.927717 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 33786 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:48.929144 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:48.932668 systemd-logind[1469]: New session 26 of user core.
Feb 13 15:34:48.946216 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:34:49.071143 sshd[4322]: Connection closed by 10.0.0.1 port 33786
Feb 13 15:34:49.071619 sshd-session[4320]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:49.082504 systemd[1]: sshd@25-10.0.0.112:22-10.0.0.1:33786.service: Deactivated successfully.
Feb 13 15:34:49.084720 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:34:49.086540 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:34:49.095340 systemd[1]: Started sshd@26-10.0.0.112:22-10.0.0.1:33790.service - OpenSSH per-connection server daemon (10.0.0.1:33790).
Feb 13 15:34:49.096417 systemd-logind[1469]: Removed session 26.
Feb 13 15:34:49.134808 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 33790 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:49.136103 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:49.139933 systemd-logind[1469]: New session 27 of user core.
Feb 13 15:34:49.154211 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:34:50.485432 containerd[1485]: time="2025-02-13T15:34:50.485256776Z" level=info msg="StopContainer for \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\" with timeout 30 (s)"
Feb 13 15:34:50.486812 containerd[1485]: time="2025-02-13T15:34:50.486768545Z" level=info msg="Stop container \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\" with signal terminated"
Feb 13 15:34:50.500379 systemd[1]: cri-containerd-bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7.scope: Deactivated successfully.
Feb 13 15:34:50.517891 containerd[1485]: time="2025-02-13T15:34:50.517828300Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:34:50.518428 containerd[1485]: time="2025-02-13T15:34:50.518365403Z" level=info msg="StopContainer for \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\" with timeout 2 (s)"
Feb 13 15:34:50.518778 containerd[1485]: time="2025-02-13T15:34:50.518757199Z" level=info msg="Stop container \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\" with signal terminated"
Feb 13 15:34:50.526844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7-rootfs.mount: Deactivated successfully.
Feb 13 15:34:50.528100 systemd-networkd[1400]: lxc_health: Link DOWN
Feb 13 15:34:50.528107 systemd-networkd[1400]: lxc_health: Lost carrier
Feb 13 15:34:50.553682 systemd[1]: cri-containerd-f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5.scope: Deactivated successfully.
Feb 13 15:34:50.554023 systemd[1]: cri-containerd-f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5.scope: Consumed 6.813s CPU time.
Feb 13 15:34:50.573754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5-rootfs.mount: Deactivated successfully.
Feb 13 15:34:50.764613 containerd[1485]: time="2025-02-13T15:34:50.764437303Z" level=info msg="shim disconnected" id=f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5 namespace=k8s.io
Feb 13 15:34:50.764613 containerd[1485]: time="2025-02-13T15:34:50.764508619Z" level=info msg="shim disconnected" id=bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7 namespace=k8s.io
Feb 13 15:34:50.764613 containerd[1485]: time="2025-02-13T15:34:50.764540910Z" level=warning msg="cleaning up after shim disconnected" id=bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7 namespace=k8s.io
Feb 13 15:34:50.764613 containerd[1485]: time="2025-02-13T15:34:50.764549767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:50.764910 containerd[1485]: time="2025-02-13T15:34:50.764520292Z" level=warning msg="cleaning up after shim disconnected" id=f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5 namespace=k8s.io
Feb 13 15:34:50.764910 containerd[1485]: time="2025-02-13T15:34:50.764781949Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:50.801972 containerd[1485]: time="2025-02-13T15:34:50.801908870Z" level=info msg="StopContainer for \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\" returns successfully"
Feb 13 15:34:50.829892 containerd[1485]: time="2025-02-13T15:34:50.829856185Z" level=info msg="StopPodSandbox for \"b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94\""
Feb 13 15:34:50.835424 containerd[1485]: time="2025-02-13T15:34:50.835393281Z" level=info msg="StopContainer for \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\" returns successfully"
Feb 13 15:34:50.835771 containerd[1485]: time="2025-02-13T15:34:50.835746664Z" level=info msg="StopPodSandbox for \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\""
Feb 13 15:34:50.835823 containerd[1485]: time="2025-02-13T15:34:50.835771612Z" level=info msg="Container to stop \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:50.835823 containerd[1485]: time="2025-02-13T15:34:50.835799896Z" level=info msg="Container to stop \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:50.835823 containerd[1485]: time="2025-02-13T15:34:50.835808712Z" level=info msg="Container to stop \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:50.835823 containerd[1485]: time="2025-02-13T15:34:50.835816386Z" level=info msg="Container to stop \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:50.835950 containerd[1485]: time="2025-02-13T15:34:50.835829422Z" level=info msg="Container to stop \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:50.836876 containerd[1485]: time="2025-02-13T15:34:50.829893015Z" level=info msg="Container to stop \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:34:50.838096 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871-shm.mount: Deactivated successfully.
Feb 13 15:34:50.841826 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94-shm.mount: Deactivated successfully.
Feb 13 15:34:50.843153 systemd[1]: cri-containerd-25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871.scope: Deactivated successfully.
Feb 13 15:34:50.844922 systemd[1]: cri-containerd-b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94.scope: Deactivated successfully.
Feb 13 15:34:50.990748 containerd[1485]: time="2025-02-13T15:34:50.990623031Z" level=info msg="shim disconnected" id=25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871 namespace=k8s.io
Feb 13 15:34:50.990748 containerd[1485]: time="2025-02-13T15:34:50.990680049Z" level=warning msg="cleaning up after shim disconnected" id=25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871 namespace=k8s.io
Feb 13 15:34:50.990748 containerd[1485]: time="2025-02-13T15:34:50.990690980Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:50.990748 containerd[1485]: time="2025-02-13T15:34:50.990708222Z" level=info msg="shim disconnected" id=b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94 namespace=k8s.io
Feb 13 15:34:50.990748 containerd[1485]: time="2025-02-13T15:34:50.990731748Z" level=warning msg="cleaning up after shim disconnected" id=b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94 namespace=k8s.io
Feb 13 15:34:50.990748 containerd[1485]: time="2025-02-13T15:34:50.990739983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:51.005965 containerd[1485]: time="2025-02-13T15:34:51.005923407Z" level=info msg="TearDown network for sandbox \"b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94\" successfully"
Feb 13 15:34:51.005965 containerd[1485]: time="2025-02-13T15:34:51.005956460Z" level=info msg="StopPodSandbox for \"b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94\" returns successfully"
Feb 13 15:34:51.007813 containerd[1485]: time="2025-02-13T15:34:51.007785752Z" level=info msg="TearDown network for sandbox \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" successfully"
Feb 13 15:34:51.007813 containerd[1485]: time="2025-02-13T15:34:51.007810269Z" level=info msg="StopPodSandbox for \"25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871\" returns successfully"
Feb 13 15:34:51.102580 kubelet[2671]: I0213 15:34:51.101917 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21cb29ba-ee78-45df-a9ab-80ef16c632c3-hubble-tls\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.102580 kubelet[2671]: I0213 15:34:51.101964 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-xtables-lock\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.102580 kubelet[2671]: I0213 15:34:51.101989 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e42c96cf-3857-4e91-a31c-4b24632ca1ea-cilium-config-path\") pod \"e42c96cf-3857-4e91-a31c-4b24632ca1ea\" (UID: \"e42c96cf-3857-4e91-a31c-4b24632ca1ea\") "
Feb 13 15:34:51.102580 kubelet[2671]: I0213 15:34:51.102011 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-host-proc-sys-net\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.102580 kubelet[2671]: I0213 15:34:51.102028 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-run\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.102580 kubelet[2671]: I0213 15:34:51.102044 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-hostproc\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103126 kubelet[2671]: I0213 15:34:51.102077 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hktn\" (UniqueName: \"kubernetes.io/projected/21cb29ba-ee78-45df-a9ab-80ef16c632c3-kube-api-access-8hktn\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103126 kubelet[2671]: I0213 15:34:51.102095 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-host-proc-sys-kernel\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103126 kubelet[2671]: I0213 15:34:51.102115 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-etc-cni-netd\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103126 kubelet[2671]: I0213 15:34:51.102133 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-config-path\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103126 kubelet[2671]: I0213 15:34:51.102151 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-bpf-maps\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103126 kubelet[2671]: I0213 15:34:51.102186 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21cb29ba-ee78-45df-a9ab-80ef16c632c3-clustermesh-secrets\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103279 kubelet[2671]: I0213 15:34:51.102202 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cni-path\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103279 kubelet[2671]: I0213 15:34:51.102221 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-lib-modules\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103279 kubelet[2671]: I0213 15:34:51.102240 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-cgroup\") pod \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\" (UID: \"21cb29ba-ee78-45df-a9ab-80ef16c632c3\") "
Feb 13 15:34:51.103279 kubelet[2671]: I0213 15:34:51.102258 2671 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l9c\" (UniqueName: \"kubernetes.io/projected/e42c96cf-3857-4e91-a31c-4b24632ca1ea-kube-api-access-w7l9c\") pod \"e42c96cf-3857-4e91-a31c-4b24632ca1ea\" (UID: \"e42c96cf-3857-4e91-a31c-4b24632ca1ea\") "
Feb 13 15:34:51.103279 kubelet[2671]: I0213 15:34:51.102689 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.103455 kubelet[2671]: I0213 15:34:51.102770 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.103455 kubelet[2671]: I0213 15:34:51.102791 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.103455 kubelet[2671]: I0213 15:34:51.102809 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-hostproc" (OuterVolumeSpecName: "hostproc") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.103455 kubelet[2671]: I0213 15:34:51.103213 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.106237 kubelet[2671]: I0213 15:34:51.106170 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.106379 kubelet[2671]: I0213 15:34:51.106278 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21cb29ba-ee78-45df-a9ab-80ef16c632c3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:34:51.106379 kubelet[2671]: I0213 15:34:51.106353 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21cb29ba-ee78-45df-a9ab-80ef16c632c3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:34:51.106379 kubelet[2671]: I0213 15:34:51.106377 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.106486 kubelet[2671]: I0213 15:34:51.106395 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cni-path" (OuterVolumeSpecName: "cni-path") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.106486 kubelet[2671]: I0213 15:34:51.106414 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.106486 kubelet[2671]: I0213 15:34:51.106434 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:34:51.107546 kubelet[2671]: I0213 15:34:51.107482 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21cb29ba-ee78-45df-a9ab-80ef16c632c3-kube-api-access-8hktn" (OuterVolumeSpecName: "kube-api-access-8hktn") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "kube-api-access-8hktn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:34:51.108049 kubelet[2671]: I0213 15:34:51.108009 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42c96cf-3857-4e91-a31c-4b24632ca1ea-kube-api-access-w7l9c" (OuterVolumeSpecName: "kube-api-access-w7l9c") pod "e42c96cf-3857-4e91-a31c-4b24632ca1ea" (UID: "e42c96cf-3857-4e91-a31c-4b24632ca1ea"). InnerVolumeSpecName "kube-api-access-w7l9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:34:51.108853 kubelet[2671]: I0213 15:34:51.108813 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42c96cf-3857-4e91-a31c-4b24632ca1ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e42c96cf-3857-4e91-a31c-4b24632ca1ea" (UID: "e42c96cf-3857-4e91-a31c-4b24632ca1ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:34:51.109190 kubelet[2671]: I0213 15:34:51.109134 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "21cb29ba-ee78-45df-a9ab-80ef16c632c3" (UID: "21cb29ba-ee78-45df-a9ab-80ef16c632c3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:34:51.202444 kubelet[2671]: I0213 15:34:51.202384 2671 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21cb29ba-ee78-45df-a9ab-80ef16c632c3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202444 kubelet[2671]: I0213 15:34:51.202427 2671 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202444 kubelet[2671]: I0213 15:34:51.202441 2671 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202444 kubelet[2671]: I0213 15:34:51.202451 2671 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202444 kubelet[2671]: I0213 15:34:51.202462 2671 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w7l9c\" (UniqueName: \"kubernetes.io/projected/e42c96cf-3857-4e91-a31c-4b24632ca1ea-kube-api-access-w7l9c\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202711 kubelet[2671]: I0213 15:34:51.202472 2671 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21cb29ba-ee78-45df-a9ab-80ef16c632c3-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202711 kubelet[2671]: I0213 15:34:51.202482 2671 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202711 kubelet[2671]: I0213 15:34:51.202492 2671 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e42c96cf-3857-4e91-a31c-4b24632ca1ea-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202711 kubelet[2671]: I0213 15:34:51.202501 2671 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202711 kubelet[2671]: I0213 15:34:51.202510 2671 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202711 kubelet[2671]: I0213 15:34:51.202519 2671 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202711 kubelet[2671]: I0213 15:34:51.202529 2671 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202711 kubelet[2671]: I0213 15:34:51.202538 2671 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8hktn\" (UniqueName: \"kubernetes.io/projected/21cb29ba-ee78-45df-a9ab-80ef16c632c3-kube-api-access-8hktn\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202897 kubelet[2671]: I0213 15:34:51.202548 2671 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202897 kubelet[2671]: I0213 15:34:51.202557 2671 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21cb29ba-ee78-45df-a9ab-80ef16c632c3-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.202897 kubelet[2671]: I0213 15:34:51.202568 2671 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21cb29ba-ee78-45df-a9ab-80ef16c632c3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:34:51.485287 kubelet[2671]: I0213 15:34:51.485252 2671 scope.go:117] "RemoveContainer" containerID="f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5"
Feb 13 15:34:51.491491 containerd[1485]: time="2025-02-13T15:34:51.491146738Z" level=info msg="RemoveContainer for \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\""
Feb 13 15:34:51.493080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b816a4762dfe3b62ae7dd44371821f6099bd92cf8374fc3d645d596706f31f94-rootfs.mount: Deactivated successfully.
Feb 13 15:34:51.493227 systemd[1]: var-lib-kubelet-pods-e42c96cf\x2d3857\x2d4e91\x2da31c\x2d4b24632ca1ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7l9c.mount: Deactivated successfully.
Feb 13 15:34:51.493336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25cde95078504dafcd8fafddf3e8de022c6aab10132fb71878454985e0393871-rootfs.mount: Deactivated successfully.
Feb 13 15:34:51.493431 systemd[1]: var-lib-kubelet-pods-21cb29ba\x2dee78\x2d45df\x2da9ab\x2d80ef16c632c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hktn.mount: Deactivated successfully.
Feb 13 15:34:51.493525 systemd[1]: var-lib-kubelet-pods-21cb29ba\x2dee78\x2d45df\x2da9ab\x2d80ef16c632c3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:34:51.493600 systemd[1]: var-lib-kubelet-pods-21cb29ba\x2dee78\x2d45df\x2da9ab\x2d80ef16c632c3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:34:51.495953 systemd[1]: Removed slice kubepods-burstable-pod21cb29ba_ee78_45df_a9ab_80ef16c632c3.slice - libcontainer container kubepods-burstable-pod21cb29ba_ee78_45df_a9ab_80ef16c632c3.slice.
Feb 13 15:34:51.496196 systemd[1]: kubepods-burstable-pod21cb29ba_ee78_45df_a9ab_80ef16c632c3.slice: Consumed 6.916s CPU time.
Feb 13 15:34:51.497240 systemd[1]: Removed slice kubepods-besteffort-pode42c96cf_3857_4e91_a31c_4b24632ca1ea.slice - libcontainer container kubepods-besteffort-pode42c96cf_3857_4e91_a31c_4b24632ca1ea.slice.
Feb 13 15:34:51.560392 containerd[1485]: time="2025-02-13T15:34:51.560328828Z" level=info msg="RemoveContainer for \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\" returns successfully"
Feb 13 15:34:51.560735 kubelet[2671]: I0213 15:34:51.560673 2671 scope.go:117] "RemoveContainer" containerID="faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc"
Feb 13 15:34:51.561835 containerd[1485]: time="2025-02-13T15:34:51.561797113Z" level=info msg="RemoveContainer for \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\""
Feb 13 15:34:51.667194 containerd[1485]: time="2025-02-13T15:34:51.667130389Z" level=info msg="RemoveContainer for \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\" returns successfully"
Feb 13 15:34:51.667447 kubelet[2671]: I0213 15:34:51.667406 2671 scope.go:117] "RemoveContainer" containerID="1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394"
Feb 13 15:34:51.668797 containerd[1485]: time="2025-02-13T15:34:51.668756565Z" level=info msg="RemoveContainer for \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\""
Feb 13 15:34:51.747881 containerd[1485]: time="2025-02-13T15:34:51.747732811Z" level=info msg="RemoveContainer for \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\" returns successfully"
Feb 13 15:34:51.748137 kubelet[2671]: I0213 15:34:51.748096 2671 scope.go:117] "RemoveContainer" containerID="2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712"
Feb 13 15:34:51.749603 containerd[1485]: time="2025-02-13T15:34:51.749571753Z" level=info msg="RemoveContainer for \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\""
Feb 13 15:34:51.851087 containerd[1485]: time="2025-02-13T15:34:51.850956822Z" level=info msg="RemoveContainer for \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\" returns successfully"
Feb 13 15:34:51.851342 kubelet[2671]: I0213 15:34:51.851289 2671 scope.go:117] "RemoveContainer" containerID="61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c"
Feb 13 15:34:51.852943 containerd[1485]: time="2025-02-13T15:34:51.852890875Z" level=info msg="RemoveContainer for \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\""
Feb 13 15:34:51.877158 containerd[1485]: time="2025-02-13T15:34:51.877108710Z" level=info msg="RemoveContainer for \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\" returns successfully"
Feb 13 15:34:51.877470 kubelet[2671]: I0213 15:34:51.877436 2671 scope.go:117] "RemoveContainer" containerID="f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5"
Feb 13 15:34:51.877762 containerd[1485]: time="2025-02-13T15:34:51.877716066Z" level=error msg="ContainerStatus for \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\": not found"
Feb 13 15:34:51.884461 kubelet[2671]: E0213 15:34:51.884429 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\": not found" containerID="f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5"
Feb 13 15:34:51.884542 kubelet[2671]: I0213 15:34:51.884518 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5"} err="failed to get container status \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f18727edfe25fcb7f0da9b05b40e72ee8d027e1f7499d3c49d66cfc5fc4e55d5\": not found"
Feb 13 15:34:51.884542 kubelet[2671]: I0213 15:34:51.884538 2671 scope.go:117] "RemoveContainer" containerID="faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc"
Feb 13 15:34:51.891333 containerd[1485]: time="2025-02-13T15:34:51.891285521Z" level=error msg="ContainerStatus for \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\": not found"
Feb 13 15:34:51.891431 kubelet[2671]: E0213 15:34:51.891404 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\": not found" containerID="faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc"
Feb 13 15:34:51.891495 kubelet[2671]: I0213 15:34:51.891437 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc"} err="failed to get container status \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\": rpc error: code = NotFound desc = an error occurred when try to find container \"faac9a5f67ed3d91027195af3a8a57dd7cd2bdbc9734119c85870186207cdbcc\": not found"
Feb 13 15:34:51.891495 kubelet[2671]: I0213 15:34:51.891449 2671 scope.go:117] "RemoveContainer" containerID="1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394"
Feb 13 15:34:51.891649 containerd[1485]: time="2025-02-13T15:34:51.891620919Z" level=error msg="ContainerStatus for \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\": not found"
Feb 13 15:34:51.891723 kubelet[2671]: E0213 15:34:51.891705 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\": not found" containerID="1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394"
Feb 13 15:34:51.891765 kubelet[2671]: I0213 15:34:51.891739 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394"} err="failed to get container status \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e06bb674c6c673a69670d264959b4aed26d93e54df3fb5834a13ee45befe394\": not found"
Feb 13 15:34:51.891765 kubelet[2671]: I0213 15:34:51.891750 2671 scope.go:117] "RemoveContainer" containerID="2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712"
Feb 13 15:34:51.891910 containerd[1485]: time="2025-02-13T15:34:51.891881346Z" level=error msg="ContainerStatus for \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\": not found"
Feb 13 15:34:51.892045 kubelet[2671]: E0213 15:34:51.892014 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\": not found" containerID="2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712"
Feb 13 15:34:51.892045 kubelet[2671]: I0213 15:34:51.892043 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712"} err="failed to get container status \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\": rpc error: code = NotFound desc = an error occurred when try to find container \"2726bef411777a69c39ffe9cf74f7ce9e48997296b5be38aed03fff3da8dd712\": not found"
Feb 13 15:34:51.892136 kubelet[2671]: I0213 15:34:51.892054 2671 scope.go:117] "RemoveContainer" containerID="61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c"
Feb 13 15:34:51.892262 containerd[1485]: time="2025-02-13T15:34:51.892228916Z" level=error msg="ContainerStatus for \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\": not found"
Feb 13 15:34:51.892375 kubelet[2671]: E0213 15:34:51.892357 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\": not found" containerID="61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c"
Feb 13 15:34:51.892406 kubelet[2671]: I0213 15:34:51.892386 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c"} err="failed to get container status \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\": rpc error: code = NotFound desc = an error occurred when try to find container \"61dc041fb9a3c43b0bc1f88a2cbb84dac4e830d6d415f534e908396bfa8e836c\": not found"
Feb 13 15:34:51.892406 kubelet[2671]: I0213 15:34:51.892399 2671 scope.go:117] "RemoveContainer" containerID="bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7"
Feb 13 15:34:51.893336 containerd[1485]: time="2025-02-13T15:34:51.893307089Z" level=info msg="RemoveContainer for \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\""
Feb 13 15:34:51.979136 containerd[1485]: time="2025-02-13T15:34:51.979082982Z" level=info msg="RemoveContainer for \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\" returns successfully"
Feb 13 15:34:51.979426 kubelet[2671]: I0213 15:34:51.979391 2671 scope.go:117] "RemoveContainer" containerID="bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7"
Feb 13 15:34:51.979810 containerd[1485]: time="2025-02-13T15:34:51.979754651Z" level=error msg="ContainerStatus for \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\": not found"
Feb 13 15:34:51.979953 kubelet[2671]: E0213 15:34:51.979932 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\": not found" containerID="bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7"
Feb 13 15:34:51.979996 kubelet[2671]: I0213 15:34:51.979972 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7"} err="failed to get container status \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb7666d32fefeeeff5bc37b0932413d6e177a16c90425b6770ba2e71fcff41e7\": not found"
Feb 13 15:34:52.281119 kubelet[2671]: I0213 15:34:52.281056 2671 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="21cb29ba-ee78-45df-a9ab-80ef16c632c3" path="/var/lib/kubelet/pods/21cb29ba-ee78-45df-a9ab-80ef16c632c3/volumes"
Feb 13 15:34:52.282097 kubelet[2671]: I0213 15:34:52.282054 2671 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e42c96cf-3857-4e91-a31c-4b24632ca1ea" path="/var/lib/kubelet/pods/e42c96cf-3857-4e91-a31c-4b24632ca1ea/volumes"
Feb 13 15:34:52.488298 sshd[4336]: Connection closed by 10.0.0.1 port 33790
Feb 13 15:34:52.488844 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:52.499895 systemd[1]: sshd@26-10.0.0.112:22-10.0.0.1:33790.service: Deactivated successfully.
Feb 13 15:34:52.502103 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:34:52.503787 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:34:52.514517 systemd[1]: Started sshd@27-10.0.0.112:22-10.0.0.1:33806.service - OpenSSH per-connection server daemon (10.0.0.1:33806).
Feb 13 15:34:52.515482 systemd-logind[1469]: Removed session 27.
Feb 13 15:34:52.557143 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 33806 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:52.559358 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:52.563928 systemd-logind[1469]: New session 28 of user core.
Feb 13 15:34:52.572312 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:34:53.428326 sshd[4501]: Connection closed by 10.0.0.1 port 33806
Feb 13 15:34:53.428661 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:53.438761 systemd[1]: sshd@27-10.0.0.112:22-10.0.0.1:33806.service: Deactivated successfully.
Feb 13 15:34:53.441643 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:34:53.442804 kubelet[2671]: I0213 15:34:53.442763 2671 topology_manager.go:215] "Topology Admit Handler" podUID="b99f6256-4bff-4be1-83c4-4686a2c6527d" podNamespace="kube-system" podName="cilium-886kr"
Feb 13 15:34:53.443214 kubelet[2671]: E0213 15:34:53.442849 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21cb29ba-ee78-45df-a9ab-80ef16c632c3" containerName="apply-sysctl-overwrites"
Feb 13 15:34:53.443214 kubelet[2671]: E0213 15:34:53.442863 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21cb29ba-ee78-45df-a9ab-80ef16c632c3" containerName="clean-cilium-state"
Feb 13 15:34:53.443214 kubelet[2671]: E0213 15:34:53.442871 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21cb29ba-ee78-45df-a9ab-80ef16c632c3" containerName="cilium-agent"
Feb 13 15:34:53.443214 kubelet[2671]: E0213 15:34:53.442879 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e42c96cf-3857-4e91-a31c-4b24632ca1ea" containerName="cilium-operator"
Feb 13 15:34:53.443214 kubelet[2671]: E0213 15:34:53.442888 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21cb29ba-ee78-45df-a9ab-80ef16c632c3" containerName="mount-cgroup"
Feb 13 15:34:53.443214 kubelet[2671]: E0213 15:34:53.442898 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21cb29ba-ee78-45df-a9ab-80ef16c632c3" containerName="mount-bpf-fs"
Feb 13 15:34:53.443214 kubelet[2671]: I0213 15:34:53.442925 2671 memory_manager.go:354] "RemoveStaleState removing state" podUID="21cb29ba-ee78-45df-a9ab-80ef16c632c3" containerName="cilium-agent"
Feb 13 15:34:53.443214 kubelet[2671]: I0213 15:34:53.442938 2671 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42c96cf-3857-4e91-a31c-4b24632ca1ea" containerName="cilium-operator"
Feb 13 15:34:53.445604 systemd-logind[1469]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:34:53.454544 systemd[1]: Started sshd@28-10.0.0.112:22-10.0.0.1:33814.service - OpenSSH per-connection server daemon (10.0.0.1:33814).
Feb 13 15:34:53.457954 systemd-logind[1469]: Removed session 28.
Feb 13 15:34:53.465515 systemd[1]: Created slice kubepods-burstable-podb99f6256_4bff_4be1_83c4_4686a2c6527d.slice - libcontainer container kubepods-burstable-podb99f6256_4bff_4be1_83c4_4686a2c6527d.slice.
Feb 13 15:34:53.500735 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 33814 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:53.502495 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:53.507196 systemd-logind[1469]: New session 29 of user core.
Feb 13 15:34:53.514434 kubelet[2671]: I0213 15:34:53.514406 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-cilium-run\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514525 kubelet[2671]: I0213 15:34:53.514444 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-lib-modules\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514525 kubelet[2671]: I0213 15:34:53.514464 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-host-proc-sys-net\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514525 kubelet[2671]: I0213 15:34:53.514484 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxx94\" (UniqueName: \"kubernetes.io/projected/b99f6256-4bff-4be1-83c4-4686a2c6527d-kube-api-access-jxx94\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514643 kubelet[2671]: I0213 15:34:53.514609 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-xtables-lock\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514822 kubelet[2671]: I0213 15:34:53.514801 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b99f6256-4bff-4be1-83c4-4686a2c6527d-cilium-config-path\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514881 kubelet[2671]: I0213 15:34:53.514841 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b99f6256-4bff-4be1-83c4-4686a2c6527d-clustermesh-secrets\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514881 kubelet[2671]: I0213 15:34:53.514864 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b99f6256-4bff-4be1-83c4-4686a2c6527d-cilium-ipsec-secrets\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514931 kubelet[2671]: I0213 15:34:53.514883 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b99f6256-4bff-4be1-83c4-4686a2c6527d-hubble-tls\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514931 kubelet[2671]: I0213 15:34:53.514910 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-etc-cni-netd\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514986 kubelet[2671]: I0213 15:34:53.514935 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-host-proc-sys-kernel\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.514986 kubelet[2671]: I0213 15:34:53.514962 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-bpf-maps\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.515026 kubelet[2671]: I0213 15:34:53.514987 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-hostproc\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.515026 kubelet[2671]: I0213 15:34:53.515014 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-cni-path\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.515093 kubelet[2671]: I0213 15:34:53.515042 2671 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b99f6256-4bff-4be1-83c4-4686a2c6527d-cilium-cgroup\") pod \"cilium-886kr\" (UID: \"b99f6256-4bff-4be1-83c4-4686a2c6527d\") " pod="kube-system/cilium-886kr"
Feb 13 15:34:53.518244 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 15:34:53.570624 sshd[4516]: Connection closed by 10.0.0.1 port 33814
Feb 13 15:34:53.570972 sshd-session[4513]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:53.582125 systemd[1]: sshd@28-10.0.0.112:22-10.0.0.1:33814.service: Deactivated successfully.
Feb 13 15:34:53.583912 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 15:34:53.585507 systemd-logind[1469]: Session 29 logged out. Waiting for processes to exit.
Feb 13 15:34:53.586929 systemd[1]: Started sshd@29-10.0.0.112:22-10.0.0.1:33816.service - OpenSSH per-connection server daemon (10.0.0.1:33816).
Feb 13 15:34:53.587860 systemd-logind[1469]: Removed session 29.
Feb 13 15:34:53.633913 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 33816 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:53.635791 sshd-session[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:53.640565 systemd-logind[1469]: New session 30 of user core.
Feb 13 15:34:53.654375 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 15:34:53.776291 kubelet[2671]: E0213 15:34:53.776234 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:53.777011 containerd[1485]: time="2025-02-13T15:34:53.776957857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-886kr,Uid:b99f6256-4bff-4be1-83c4-4686a2c6527d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:34:53.803606 containerd[1485]: time="2025-02-13T15:34:53.803336322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:34:53.803606 containerd[1485]: time="2025-02-13T15:34:53.803412457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:34:53.803606 containerd[1485]: time="2025-02-13T15:34:53.803427867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:53.803606 containerd[1485]: time="2025-02-13T15:34:53.803519861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:53.825375 systemd[1]: Started cri-containerd-23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc.scope - libcontainer container 23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc.
Feb 13 15:34:53.850018 containerd[1485]: time="2025-02-13T15:34:53.849966304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-886kr,Uid:b99f6256-4bff-4be1-83c4-4686a2c6527d,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\""
Feb 13 15:34:53.851089 kubelet[2671]: E0213 15:34:53.851044 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:53.853540 containerd[1485]: time="2025-02-13T15:34:53.853482354Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:34:53.878208 containerd[1485]: time="2025-02-13T15:34:53.878131339Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac1630ba33ab2cf041acd01ff9f6b9d6b8a33508f5155004f37bb1cba7630686\""
Feb 13 15:34:53.878758 containerd[1485]: time="2025-02-13T15:34:53.878733794Z" level=info msg="StartContainer for \"ac1630ba33ab2cf041acd01ff9f6b9d6b8a33508f5155004f37bb1cba7630686\""
Feb 13 15:34:53.923372 systemd[1]: Started cri-containerd-ac1630ba33ab2cf041acd01ff9f6b9d6b8a33508f5155004f37bb1cba7630686.scope - libcontainer container ac1630ba33ab2cf041acd01ff9f6b9d6b8a33508f5155004f37bb1cba7630686.
Feb 13 15:34:53.950506 containerd[1485]: time="2025-02-13T15:34:53.950446667Z" level=info msg="StartContainer for \"ac1630ba33ab2cf041acd01ff9f6b9d6b8a33508f5155004f37bb1cba7630686\" returns successfully"
Feb 13 15:34:53.961728 systemd[1]: cri-containerd-ac1630ba33ab2cf041acd01ff9f6b9d6b8a33508f5155004f37bb1cba7630686.scope: Deactivated successfully.
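The containerd entries above trace one pass through the CRI lifecycle the kubelet drives: RunPodSandbox returns a sandbox ID, CreateContainer registers the mount-cgroup init container inside that sandbox, and StartContainer launches it. The same verbs can be exercised by hand with crictl; the sketch below is illustrative only and assumes crictl is installed, points at the same containerd socket the kubelet uses, and that pod-config.json and container-config.json are placeholder config files you would supply.

```python
import subprocess

# Sketch of driving the CRI verbs shown in the log above by hand via crictl.
def run(cmd: list[str]) -> str:
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

pod_id = run(["crictl", "runp", "pod-config.json"])        # RunPodSandbox -> sandbox id
ctr_id = run(["crictl", "create", pod_id,                  # CreateContainer -> container id
              "container-config.json", "pod-config.json"])
run(["crictl", "start", ctr_id])                           # StartContainer
print(f"sandbox={pod_id} container={ctr_id}")
```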
Feb 13 15:34:53.994028 containerd[1485]: time="2025-02-13T15:34:53.993968196Z" level=info msg="shim disconnected" id=ac1630ba33ab2cf041acd01ff9f6b9d6b8a33508f5155004f37bb1cba7630686 namespace=k8s.io
Feb 13 15:34:53.994028 containerd[1485]: time="2025-02-13T15:34:53.994019603Z" level=warning msg="cleaning up after shim disconnected" id=ac1630ba33ab2cf041acd01ff9f6b9d6b8a33508f5155004f37bb1cba7630686 namespace=k8s.io
Feb 13 15:34:53.994028 containerd[1485]: time="2025-02-13T15:34:53.994027949Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:54.340045 kubelet[2671]: E0213 15:34:54.340003 2671 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:34:54.497566 kubelet[2671]: E0213 15:34:54.497537 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:54.499855 containerd[1485]: time="2025-02-13T15:34:54.499809209Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:34:54.534622 containerd[1485]: time="2025-02-13T15:34:54.534538659Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ca452c29f98f210256f3ae35f152ec66bbb897e8907bf149a3a7a98bae6779fb\""
Feb 13 15:34:54.535223 containerd[1485]: time="2025-02-13T15:34:54.535167355Z" level=info msg="StartContainer for \"ca452c29f98f210256f3ae35f152ec66bbb897e8907bf149a3a7a98bae6779fb\""
Feb 13 15:34:54.566263 systemd[1]: Started cri-containerd-ca452c29f98f210256f3ae35f152ec66bbb897e8907bf149a3a7a98bae6779fb.scope - libcontainer container ca452c29f98f210256f3ae35f152ec66bbb897e8907bf149a3a7a98bae6779fb.
Feb 13 15:34:54.594317 containerd[1485]: time="2025-02-13T15:34:54.594197811Z" level=info msg="StartContainer for \"ca452c29f98f210256f3ae35f152ec66bbb897e8907bf149a3a7a98bae6779fb\" returns successfully"
Feb 13 15:34:54.600014 systemd[1]: cri-containerd-ca452c29f98f210256f3ae35f152ec66bbb897e8907bf149a3a7a98bae6779fb.scope: Deactivated successfully.
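The recurring dns.go:153 error in this stream is the kubelet warning that the node's resolv.conf lists more nameservers than the glibc resolver will actually use (three, per its MAXNS limit), so it truncates the list to the three shown: 1.1.1.1 1.0.0.1 8.8.8.8. A quick check for that condition on a node, as a sketch assuming the default /etc/resolv.conf path:

```python
# Sketch: detect the condition behind the dns.go:153 warning above. glibc's
# resolver honors at most three "nameserver" lines (MAXNS), so the kubelet
# drops the rest and logs the truncated list it applied.
MAXNS = 3

def nameservers(path: str = "/etc/resolv.conf") -> list[str]:
    with open(path) as f:
        return [parts[1] for parts in (line.split() for line in f)
                if parts and parts[0] == "nameserver" and len(parts) > 1]

ns = nameservers()
if len(ns) > MAXNS:
    print(f"{len(ns)} nameservers found; only the first {MAXNS} apply:",
          " ".join(ns[:MAXNS]))
```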
Feb 13 15:34:54.624292 containerd[1485]: time="2025-02-13T15:34:54.624223525Z" level=info msg="shim disconnected" id=ca452c29f98f210256f3ae35f152ec66bbb897e8907bf149a3a7a98bae6779fb namespace=k8s.io
Feb 13 15:34:54.624292 containerd[1485]: time="2025-02-13T15:34:54.624290132Z" level=warning msg="cleaning up after shim disconnected" id=ca452c29f98f210256f3ae35f152ec66bbb897e8907bf149a3a7a98bae6779fb namespace=k8s.io
Feb 13 15:34:54.624546 containerd[1485]: time="2025-02-13T15:34:54.624301343Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:55.500779 kubelet[2671]: E0213 15:34:55.500732 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:55.502689 containerd[1485]: time="2025-02-13T15:34:55.502648741Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:34:55.641779 containerd[1485]: time="2025-02-13T15:34:55.641735185Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b\""
Feb 13 15:34:55.642370 containerd[1485]: time="2025-02-13T15:34:55.642305670Z" level=info msg="StartContainer for \"f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b\""
Feb 13 15:34:55.673201 systemd[1]: Started cri-containerd-f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b.scope - libcontainer container f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b.
Feb 13 15:34:55.733153 systemd[1]: cri-containerd-f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b.scope: Deactivated successfully.
Feb 13 15:34:55.741204 containerd[1485]: time="2025-02-13T15:34:55.741147510Z" level=info msg="StartContainer for \"f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b\" returns successfully"
Feb 13 15:34:55.761433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b-rootfs.mount: Deactivated successfully.
Feb 13 15:34:55.773510 kubelet[2671]: I0213 15:34:55.773453 2671 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:34:55Z","lastTransitionTime":"2025-02-13T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:34:55.885347 containerd[1485]: time="2025-02-13T15:34:55.885275329Z" level=info msg="shim disconnected" id=f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b namespace=k8s.io
Feb 13 15:34:55.885347 containerd[1485]: time="2025-02-13T15:34:55.885336335Z" level=warning msg="cleaning up after shim disconnected" id=f524c35ef6adb2cee7dc15e0328a322e8ab814146b2a7bf9cc650248eb549e5b namespace=k8s.io
Feb 13 15:34:55.885347 containerd[1485]: time="2025-02-13T15:34:55.885347276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:56.504173 kubelet[2671]: E0213 15:34:56.504142 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:56.505704 containerd[1485]: time="2025-02-13T15:34:56.505668681Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:34:57.102341 containerd[1485]: time="2025-02-13T15:34:57.102278843Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0\""
Feb 13 15:34:57.102903 containerd[1485]: time="2025-02-13T15:34:57.102859055Z" level=info msg="StartContainer for \"e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0\""
Feb 13 15:34:57.132239 systemd[1]: Started cri-containerd-e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0.scope - libcontainer container e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0.
Feb 13 15:34:57.158400 systemd[1]: cri-containerd-e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0.scope: Deactivated successfully.
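The setters.go:568 entry above records the node flipping to NotReady because the CNI plugin is not yet initialized, which is expected while Cilium's init containers are still running. The logged condition is plain JSON; as an illustration, this sketch parses the exact object from the log line to surface the reason:

```python
import json

# The Ready condition logged by setters.go:568 above, verbatim; parsing it
# shows why the node went NotReady while the Cilium agent was still starting.
raw = ('{"type":"Ready","status":"False",'
       '"lastHeartbeatTime":"2025-02-13T15:34:55Z",'
       '"lastTransitionTime":"2025-02-13T15:34:55Z",'
       '"reason":"KubeletNotReady",'
       '"message":"container runtime network not ready: NetworkReady=false '
       'reason:NetworkPluginNotReady message:Network plugin returns error: '
       'cni plugin not initialized"}')

cond = json.loads(raw)
print(cond["type"], cond["status"], "-", cond["reason"])
# -> Ready False - KubeletNotReady
```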
Feb 13 15:34:57.172538 containerd[1485]: time="2025-02-13T15:34:57.172479026Z" level=info msg="StartContainer for \"e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0\" returns successfully"
Feb 13 15:34:57.507139 kubelet[2671]: E0213 15:34:57.507109 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:57.522957 containerd[1485]: time="2025-02-13T15:34:57.522830237Z" level=info msg="shim disconnected" id=e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0 namespace=k8s.io
Feb 13 15:34:57.522957 containerd[1485]: time="2025-02-13T15:34:57.522953801Z" level=warning msg="cleaning up after shim disconnected" id=e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0 namespace=k8s.io
Feb 13 15:34:57.522957 containerd[1485]: time="2025-02-13T15:34:57.522965094Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:57.831695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e964a21586566d8fe2f7af154060641dd59d46f236fa240ebf27be9aefbe2fb0-rootfs.mount: Deactivated successfully.
Feb 13 15:34:58.514038 kubelet[2671]: E0213 15:34:58.514007 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:58.516020 containerd[1485]: time="2025-02-13T15:34:58.515977595Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:34:59.091090 containerd[1485]: time="2025-02-13T15:34:59.090990728Z" level=info msg="CreateContainer within sandbox \"23a4090f864c14ab1ccbff2944d96afb5c518b8ad3442021e4e8e2e6498e4fdc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dbf966b4bc2e035d9b97cab3097ece8e0170fe3806e4ea6827648dcc1157a3d4\""
Feb 13 15:34:59.091642 containerd[1485]: time="2025-02-13T15:34:59.091570760Z" level=info msg="StartContainer for \"dbf966b4bc2e035d9b97cab3097ece8e0170fe3806e4ea6827648dcc1157a3d4\""
Feb 13 15:34:59.126260 systemd[1]: Started cri-containerd-dbf966b4bc2e035d9b97cab3097ece8e0170fe3806e4ea6827648dcc1157a3d4.scope - libcontainer container dbf966b4bc2e035d9b97cab3097ece8e0170fe3806e4ea6827648dcc1157a3d4.
Feb 13 15:34:59.228053 containerd[1485]: time="2025-02-13T15:34:59.227987935Z" level=info msg="StartContainer for \"dbf966b4bc2e035d9b97cab3097ece8e0170fe3806e4ea6827648dcc1157a3d4\" returns successfully"
Feb 13 15:34:59.518444 kubelet[2671]: E0213 15:34:59.518413 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:59.632125 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:35:00.276775 systemd[1]: run-containerd-runc-k8s.io-dbf966b4bc2e035d9b97cab3097ece8e0170fe3806e4ea6827648dcc1157a3d4-runc.n0LNVP.mount: Deactivated successfully.
Feb 13 15:35:00.520398 kubelet[2671]: E0213 15:35:00.520366 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:01.278018 kubelet[2671]: E0213 15:35:01.277966 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:02.829823 systemd-networkd[1400]: lxc_health: Link UP
Feb 13 15:35:02.845591 systemd-networkd[1400]: lxc_health: Gained carrier
Feb 13 15:35:03.779769 kubelet[2671]: E0213 15:35:03.779714 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:03.801715 kubelet[2671]: I0213 15:35:03.801665 2671 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-886kr" podStartSLOduration=10.801622457 podStartE2EDuration="10.801622457s" podCreationTimestamp="2025-02-13 15:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:59.676939593 +0000 UTC m=+95.478051698" watchObservedRunningTime="2025-02-13 15:35:03.801622457 +0000 UTC m=+99.602734552"
Feb 13 15:35:04.277971 kubelet[2671]: E0213 15:35:04.277933 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:04.530845 kubelet[2671]: E0213 15:35:04.530564 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:04.601163 systemd-networkd[1400]: lxc_health: Gained IPv6LL
Feb 13 15:35:05.278131 kubelet[2671]: E0213 15:35:05.278051 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:05.531877 kubelet[2671]: E0213 15:35:05.531748 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:08.839153 sshd[4529]: Connection closed by 10.0.0.1 port 33816
Feb 13 15:35:08.839507 sshd-session[4523]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:08.843108 systemd[1]: sshd@29-10.0.0.112:22-10.0.0.1:33816.service: Deactivated successfully.
Feb 13 15:35:08.845199 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 15:35:08.846039 systemd-logind[1469]: Session 30 logged out. Waiting for processes to exit.
Feb 13 15:35:08.846960 systemd-logind[1469]: Removed session 30.
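A closing note on the pod_startup_latency_tracker entry above: both pull timestamps are zero-valued because the images were already on the node, so the reported podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. The arithmetic checks out from the logged values, truncated here to the microseconds Python's datetime can carry:

```python
from datetime import datetime, timezone

# Sanity-check podStartSLOduration=10.801622457 from the log values above:
# watchObservedRunningTime (15:35:03.801622457) minus podCreationTimestamp
# (15:34:53). Nanoseconds are truncated to microseconds.
created  = datetime(2025, 2, 13, 15, 34, 53, 0, tzinfo=timezone.utc)
observed = datetime(2025, 2, 13, 15, 35, 3, 801622, tzinfo=timezone.utc)
print((observed - created).total_seconds())  # -> 10.801622, matching 10.801622457s
```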