Feb 13 15:47:45.866140 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025
Feb 13 15:47:45.866180 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:47:45.866192 kernel: BIOS-provided physical RAM map:
Feb 13 15:47:45.866199 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:47:45.866205 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:47:45.866211 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:47:45.866219 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 15:47:45.866225 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 15:47:45.866232 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 15:47:45.866240 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 15:47:45.866247 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:47:45.866253 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:47:45.866260 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:47:45.866266 kernel: NX (Execute Disable) protection: active
Feb 13 15:47:45.866274 kernel: APIC: Static calls initialized
Feb 13 15:47:45.866283 kernel: SMBIOS 2.8 present.
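
The BIOS-e820 lines above are the firmware's map of physical memory, echoed verbatim by the kernel at boot. A minimal Python sketch (an annotation, not part of this log) that parses such lines and totals the ranges marked usable:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg_lines):
        # Sum the sizes of all e820 ranges marked 'usable'; ranges are inclusive.
        total = 0
        for line in dmesg_lines:
            m = E820_RE.search(line)
            if m and m.group(3) == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1
        return total

    # The two usable ranges above (0x0-0x9fbff and 0x100000-0x9cfdbfff)
    # come to roughly 2.45 GiB, matching the guest RAM size of this VM.
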
Feb 13 15:47:45.866290 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 15:47:45.866297 kernel: Hypervisor detected: KVM
Feb 13 15:47:45.866304 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:47:45.866311 kernel: kvm-clock: using sched offset of 2288351046 cycles
Feb 13 15:47:45.866318 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:47:45.866325 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 15:47:45.866332 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:47:45.866340 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:47:45.866347 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 15:47:45.866356 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:47:45.866363 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:47:45.866370 kernel: Using GB pages for direct mapping
Feb 13 15:47:45.866377 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:47:45.866384 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 15:47:45.866392 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:47:45.866399 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:47:45.866413 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:47:45.866420 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 15:47:45.866430 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:47:45.866437 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:47:45.866444 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:47:45.866451 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:47:45.866458 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 15:47:45.866465 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 15:47:45.866476 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 15:47:45.866485 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 15:47:45.866493 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 15:47:45.866500 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 15:47:45.866507 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 15:47:45.866515 kernel: No NUMA configuration found
Feb 13 15:47:45.866522 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 15:47:45.866529 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 15:47:45.866539 kernel: Zone ranges:
Feb 13 15:47:45.866546 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:47:45.866554 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 15:47:45.866561 kernel: Normal empty
Feb 13 15:47:45.866568 kernel: Movable zone start for each node
Feb 13 15:47:45.866575 kernel: Early memory node ranges
Feb 13 15:47:45.866583 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:47:45.866590 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 15:47:45.866597 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
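
The "Early table checksum verification disabled" line refers to ACPI's integrity rule: every table must sum to zero modulo 256, with one byte chosen to make that true. A small illustration (a toy table, not tied to the BOCHS tables above):

    def acpi_checksum_ok(table: bytes) -> bool:
        # An ACPI table is valid when all of its bytes sum to 0 mod 256.
        return sum(table) % 256 == 0

    # Build a toy 4-byte "table" whose final byte is the checksum:
    body = bytes([0x12, 0x34, 0x56])
    chk = (-sum(body)) % 256
    assert acpi_checksum_ok(body + bytes([chk]))
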
Feb 13 15:47:45.866607 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:47:45.866614 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:47:45.866622 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 15:47:45.866629 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:47:45.866636 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:47:45.866644 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:47:45.866651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:47:45.866658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:47:45.866666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:47:45.866673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:47:45.866682 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:47:45.866690 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:47:45.866697 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:47:45.866704 kernel: TSC deadline timer available
Feb 13 15:47:45.866712 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:47:45.866719 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:47:45.866726 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:47:45.866734 kernel: kvm-guest: setup PV sched yield
Feb 13 15:47:45.866741 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 15:47:45.866750 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:47:45.866758 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:47:45.866766 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:47:45.866773 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:47:45.866781 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:47:45.866788 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:47:45.866795 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:47:45.866802 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:47:45.866811 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:47:45.866821 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:47:45.866828 kernel: random: crng init done
Feb 13 15:47:45.866836 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:47:45.866843 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:47:45.866851 kernel: Fallback order for Node 0: 0
Feb 13 15:47:45.866858 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
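
Note that the "Kernel command line:" entry repeats the bootloader's line with "rootflags=rw mount.usrflags=ro" prepended, which is why several parameters appear twice. Kernel parameters are plain space-separated key=value tokens; a sketch of splitting them (illustrative; repeated keys keep the last value, as the kernel does for most parameters):

    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for tok in cmdline.split():
            key, sep, val = tok.partition("=")
            params[key] = val if sep else True  # bare flags become True
        return params

    params = parse_cmdline(
        "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected mount.usrflags=ro"
    )
    assert params["root"] == "LABEL=ROOT"
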
Feb 13 15:47:45.866865 kernel: Policy zone: DMA32
Feb 13 15:47:45.866872 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:47:45.866882 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 138948K reserved, 0K cma-reserved)
Feb 13 15:47:45.866890 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:47:45.866897 kernel: ftrace: allocating 37890 entries in 149 pages
Feb 13 15:47:45.866904 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:47:45.866912 kernel: Dynamic Preempt: voluntary
Feb 13 15:47:45.866919 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:47:45.866927 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:47:45.866934 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:47:45.866942 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:47:45.866951 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:47:45.866960 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:47:45.866969 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:47:45.866978 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:47:45.866986 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:47:45.866994 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:47:45.867001 kernel: Console: colour VGA+ 80x25
Feb 13 15:47:45.867008 kernel: printk: console [ttyS0] enabled
Feb 13 15:47:45.867015 kernel: ACPI: Core revision 20230628
Feb 13 15:47:45.867025 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:47:45.867033 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:47:45.867040 kernel: x2apic enabled
Feb 13 15:47:45.867047 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:47:45.867055 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:47:45.867062 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:47:45.867070 kernel: kvm-guest: setup PV IPIs
Feb 13 15:47:45.867086 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:47:45.867094 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:47:45.867102 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
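
The skipped-calibration value can be reproduced from lpj alone. Assuming CONFIG_HZ=1000 (an assumption, but the only value consistent with these numbers), the kernel prints lpj/(500000/HZ) and (lpj/(5000/HZ)) % 100, truncating rather than rounding:

    HZ = 1000            # assumed; consistent with lpj and the printed value
    lpj = 2794748        # from the log line above (2794.748 MHz TSC / HZ)
    whole = lpj // (500000 // HZ)        # 5589
    frac = (lpj // (5000 // HZ)) % 100   # 49
    print(f"{whole}.{frac:02d} BogoMIPS (lpj={lpj})")  # 5589.49 BogoMIPS
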
Feb 13 15:47:45.867109 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:47:45.867117 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:47:45.867126 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:47:45.867134 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:47:45.867142 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:47:45.867150 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:47:45.867175 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:47:45.867185 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:47:45.867192 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:47:45.867200 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:47:45.867208 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:47:45.867216 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:47:45.867224 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:47:45.867232 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:47:45.867239 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:47:45.867249 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:47:45.867257 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:47:45.867265 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:47:45.867272 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:47:45.867280 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:47:45.867288 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:47:45.867295 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:47:45.867303 kernel: landlock: Up and running.
Feb 13 15:47:45.867310 kernel: SELinux: Initializing.
Feb 13 15:47:45.867320 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:47:45.867328 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:47:45.867335 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:47:45.867343 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:47:45.867351 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:47:45.867359 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:47:45.867366 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:47:45.867374 kernel: ... version: 0
Feb 13 15:47:45.867381 kernel: ... bit width: 48
Feb 13 15:47:45.867391 kernel: ... generic registers: 6
Feb 13 15:47:45.867399 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:47:45.867412 kernel: ... max period: 00007fffffffffff
Feb 13 15:47:45.867420 kernel: ... fixed-purpose events: 0
Feb 13 15:47:45.867427 kernel: ... event mask: 000000000000003f
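
The mitigation lines above are also exposed at runtime under sysfs, one file per vulnerability, with content mirroring these dmesg entries. A small reader:

    import pathlib

    VULN_DIR = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")

    def mitigation_report() -> dict:
        # One file per issue (spectre_v1, spectre_v2, retbleed, ...).
        return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

    for name, status in mitigation_report().items():
        print(f"{name:25} {status}")
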
Feb 13 15:47:45.867435 kernel: signal: max sigframe size: 1776
Feb 13 15:47:45.867442 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:47:45.867450 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:47:45.867458 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:47:45.867468 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:47:45.867475 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:47:45.867483 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:47:45.867490 kernel: smpboot: Max logical packages: 1
Feb 13 15:47:45.867498 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 15:47:45.867506 kernel: devtmpfs: initialized
Feb 13 15:47:45.867513 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:47:45.867521 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:47:45.867529 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:47:45.867538 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:47:45.867546 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:47:45.867554 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:47:45.867561 kernel: audit: type=2000 audit(1739461665.378:1): state=initialized audit_enabled=0 res=1
Feb 13 15:47:45.867569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:47:45.867577 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:47:45.867584 kernel: cpuidle: using governor menu
Feb 13 15:47:45.867592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:47:45.867602 kernel: dca service started, version 1.12.1
Feb 13 15:47:45.867615 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 15:47:45.867623 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 15:47:45.867631 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:47:45.867638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
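
The audit record carries the wall-clock epoch (1739461665.378), which lines up with the rtc_cmos message later in this log that sets the system clock to 2025-02-13T15:47:45 UTC (1739461665). Checking the conversion:

    from datetime import datetime, timezone

    ts = datetime.fromtimestamp(1739461665.378, tz=timezone.utc)
    print(ts.isoformat())  # 2025-02-13T15:47:45.378000+00:00
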
Feb 13 15:47:45.867646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:47:45.867654 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:47:45.867661 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:47:45.867669 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:47:45.867677 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:47:45.867686 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:47:45.867694 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:47:45.867702 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:47:45.867709 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:47:45.867717 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:47:45.867724 kernel: ACPI: Interpreter enabled
Feb 13 15:47:45.867732 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:47:45.867739 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:47:45.867747 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:47:45.867757 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:47:45.867765 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:47:45.867772 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:47:45.867938 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:47:45.868066 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:47:45.868201 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:47:45.868212 kernel: PCI host bridge to bus 0000:00
Feb 13 15:47:45.868340 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:47:45.868462 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:47:45.868572 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:47:45.868681 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 15:47:45.868789 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 15:47:45.868897 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 15:47:45.869005 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:47:45.869146 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:47:45.869305 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:47:45.869436 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 15:47:45.869556 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 15:47:45.869674 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 15:47:45.869793 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:47:45.869921 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:47:45.870051 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 15:47:45.870190 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 15:47:45.870312 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 15:47:45.870453 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:47:45.870574 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 15:47:45.870693 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 15:47:45.870812 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 15:47:45.870948 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:47:45.871068 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 15:47:45.871202 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 15:47:45.871323 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 15:47:45.871450 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 15:47:45.871579 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:47:45.871702 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:47:45.871829 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:47:45.871949 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 15:47:45.872068 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 15:47:45.872209 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:47:45.872330 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 15:47:45.872340 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:47:45.872352 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:47:45.872360 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:47:45.872367 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:47:45.872375 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:47:45.872383 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:47:45.872390 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:47:45.872398 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:47:45.872412 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:47:45.872420 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:47:45.872430 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:47:45.872438 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:47:45.872445 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:47:45.872453 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:47:45.872460 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:47:45.872468 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:47:45.872476 kernel: iommu: Default domain type: Translated
Feb 13 15:47:45.872484 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:47:45.872491 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:47:45.872501 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:47:45.872508 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:47:45.872516 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 15:47:45.872637 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:47:45.872757 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:47:45.872876 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:47:45.872886 kernel: vgaarb: loaded
Feb 13 15:47:45.872894 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:47:45.872905 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:47:45.872913 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:47:45.872920 kernel: VFS: Disk quotas dquot_6.6.0
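
Each "pci ... [vvvv:dddd]" line names a device by PCI vendor and device ID: 8086 is Intel (the emulated Q35/ICH9 parts), 1234:1111 is QEMU's standard VGA, and 1af4 is Red Hat's virtio vendor ID (1000 = network, 1001 = block, 1005 = RNG). A sketch that pulls these IDs out of such lines:

    import re

    PCI_RE = re.compile(r"pci (\S+): \[([0-9a-f]{4}):([0-9a-f]{4})\]")
    VENDORS = {0x1af4: "Red Hat/virtio", 0x8086: "Intel", 0x1234: "QEMU"}

    def parse(line):
        m = PCI_RE.search(line)
        if not m:
            return None
        bdf = m.group(1)
        vid, did = int(m.group(2), 16), int(m.group(3), 16)
        return bdf, VENDORS.get(vid, hex(vid)), hex(did)

    print(parse("pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000"))
    # ('0000:00:04.0', 'Red Hat/virtio', '0x1000')  -> the virtio-net NIC
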
Feb 13 15:47:45.872928 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:47:45.872936 kernel: pnp: PnP ACPI init
Feb 13 15:47:45.873066 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 15:47:45.873077 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:47:45.873085 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:47:45.873096 kernel: NET: Registered PF_INET protocol family
Feb 13 15:47:45.873104 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:47:45.873111 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:47:45.873119 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:47:45.873127 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:47:45.873135 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:47:45.873142 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:47:45.873150 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:47:45.873170 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:47:45.873180 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:47:45.873188 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:47:45.873301 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:47:45.873419 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:47:45.873531 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:47:45.873639 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 15:47:45.873748 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 15:47:45.873856 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 15:47:45.873869 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:47:45.873877 kernel: Initialise system trusted keyrings
Feb 13 15:47:45.873885 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:47:45.873893 kernel: Key type asymmetric registered
Feb 13 15:47:45.873900 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:47:45.873908 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:47:45.873916 kernel: io scheduler mq-deadline registered
Feb 13 15:47:45.873923 kernel: io scheduler kyber registered
Feb 13 15:47:45.873931 kernel: io scheduler bfq registered
Feb 13 15:47:45.873938 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:47:45.873949 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:47:45.873956 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:47:45.873964 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:47:45.873972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:47:45.873979 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:47:45.873987 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:47:45.873994 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:47:45.874002 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:47:45.874128 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:47:45.874142 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
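
The recurring "(order: N, M bytes)" annotations follow from the allocation size: the kernel rounds each hash table up to a power-of-two number of 4 KiB pages, and the order is that power. For the TCP established table, 32768 entries at 8 bytes per bucket need 262144 bytes, i.e. 64 pages, i.e. order 6 (entry sizes here are inferred from the logged byte counts):

    import math

    def order_for(entries: int, entry_bytes: int, page: int = 4096) -> int:
        # Smallest order with page * 2**order >= entries * entry_bytes.
        need = entries * entry_bytes
        return max(0, math.ceil(math.log2(need / page)))

    assert order_for(32768, 8) == 6    # TCP established: 262144 bytes
    assert order_for(32768, 32) == 8   # TCP bind: 1048576 bytes
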
Feb 13 15:47:45.874279 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:47:45.874394 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:47:45 UTC (1739461665)
Feb 13 15:47:45.874516 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 15:47:45.874527 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:47:45.874535 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:47:45.874543 kernel: Segment Routing with IPv6
Feb 13 15:47:45.874555 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:47:45.874562 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:47:45.874570 kernel: Key type dns_resolver registered
Feb 13 15:47:45.874577 kernel: IPI shorthand broadcast: enabled
Feb 13 15:47:45.874585 kernel: sched_clock: Marking stable (564002273, 105028123)->(715326563, -46296167)
Feb 13 15:47:45.874593 kernel: registered taskstats version 1
Feb 13 15:47:45.874601 kernel: Loading compiled-in X.509 certificates
Feb 13 15:47:45.874608 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 15:47:45.874616 kernel: Key type .fscrypt registered
Feb 13 15:47:45.874623 kernel: Key type fscrypt-provisioning registered
Feb 13 15:47:45.874633 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:47:45.874641 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:47:45.874649 kernel: ima: No architecture policies found
Feb 13 15:47:45.874656 kernel: clk: Disabling unused clocks
Feb 13 15:47:45.874664 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 15:47:45.874671 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:47:45.874679 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 15:47:45.874687 kernel: Run /init as init process
Feb 13 15:47:45.874696 kernel: with arguments:
Feb 13 15:47:45.874704 kernel: /init
Feb 13 15:47:45.874711 kernel: with environment:
Feb 13 15:47:45.874718 kernel: HOME=/
Feb 13 15:47:45.874726 kernel: TERM=linux
Feb 13 15:47:45.874733 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:47:45.874743 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:47:45.874753 systemd[1]: Detected virtualization kvm.
Feb 13 15:47:45.874764 systemd[1]: Detected architecture x86-64.
Feb 13 15:47:45.874772 systemd[1]: Running in initrd.
Feb 13 15:47:45.874780 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:47:45.874788 systemd[1]: Hostname set to <localhost>.
Feb 13 15:47:45.874797 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:47:45.874805 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:47:45.874813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:47:45.874821 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:47:45.874833 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:47:45.874852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
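
systemd's startup banner lists its compile-time features as +NAME/-NAME tokens. A trivial parser for that banner (the trailing default-hierarchy=unified token is deliberately ignored):

    def parse_features(banner: str) -> dict:
        feats = {}
        for tok in banner.split():
            if tok[0] in "+-":
                feats[tok[1:]] = tok[0] == "+"
        return feats

    flags = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +TPM2 -SYSVINIT")
    assert flags["SELINUX"] and not flags["APPARMOR"]
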
Feb 13 15:47:45.874863 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:47:45.874871 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:47:45.874881 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:47:45.874892 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:47:45.874901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:47:45.874909 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:47:45.874918 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:47:45.874927 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:47:45.874937 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:47:45.874946 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:47:45.874956 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:47:45.874967 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:47:45.874975 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:47:45.874984 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:47:45.874992 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:47:45.875001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:47:45.875009 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:47:45.875017 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:47:45.875026 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:47:45.875034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:47:45.875045 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:47:45.875053 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:47:45.875062 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:47:45.875070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:47:45.875078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:47:45.875087 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:47:45.875095 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:47:45.875104 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:47:45.875131 systemd-journald[193]: Collecting audit messages is disabled.
Feb 13 15:47:45.875152 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:47:45.875176 systemd-journald[193]: Journal started
Feb 13 15:47:45.875196 systemd-journald[193]: Runtime Journal (/run/log/journal/e95a935af155482cb0c53898310386bf) is 6.0M, max 48.3M, 42.3M free.
Feb 13 15:47:45.863846 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 15:47:45.898486 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:47:45.898504 kernel: Bridge firewalling registered
Feb 13 15:47:45.890334 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 15:47:45.900085 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:47:45.900599 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:47:45.900607 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:47:45.913553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:47:45.917552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:47:45.921705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:47:45.925424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:47:45.929069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:47:45.931959 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:47:45.935749 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:47:45.937475 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:47:45.942116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:47:45.960345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:47:45.967493 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:47:45.974317 systemd-resolved[219]: Positive Trust Anchors:
Feb 13 15:47:45.974330 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:47:45.974360 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:47:45.976712 systemd-resolved[219]: Defaulting to hostname 'linux'.
Feb 13 15:47:45.977764 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:47:45.985433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:47:45.992172 dracut-cmdline[229]: dracut-dracut-053
Feb 13 15:47:45.995957 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:47:46.086462 kernel: SCSI subsystem initialized
Feb 13 15:47:46.095189 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:47:46.106201 kernel: iscsi: registered transport (tcp)
Feb 13 15:47:46.126349 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:47:46.126394 kernel: QLogic iSCSI HBA Driver
Feb 13 15:47:46.171393 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:47:46.190283 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:47:46.214922 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:47:46.214998 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:47:46.215017 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:47:46.258205 kernel: raid6: avx2x4 gen() 19094 MB/s
Feb 13 15:47:46.275183 kernel: raid6: avx2x2 gen() 23506 MB/s
Feb 13 15:47:46.292279 kernel: raid6: avx2x1 gen() 25737 MB/s
Feb 13 15:47:46.292302 kernel: raid6: using algorithm avx2x1 gen() 25737 MB/s
Feb 13 15:47:46.310258 kernel: raid6: .... xor() 15766 MB/s, rmw enabled
Feb 13 15:47:46.310276 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:47:46.331184 kernel: xor: automatically using best checksumming function avx
Feb 13 15:47:46.477194 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:47:46.490497 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:47:46.507353 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:47:46.519686 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Feb 13 15:47:46.524327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:47:46.539337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:47:46.553232 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Feb 13 15:47:46.587528 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:47:46.595356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:47:46.657919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:47:46.665285 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:47:46.684427 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:47:46.686516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:47:46.695628 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 15:47:46.731467 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:47:46.731673 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:47:46.731689 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:47:46.731704 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:47:46.731717 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:47:46.731732 kernel: GPT:9289727 != 19775487
Feb 13 15:47:46.731746 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:47:46.731760 kernel: GPT:9289727 != 19775487
Feb 13 15:47:46.731774 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:47:46.731791 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:47:46.688932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:47:46.690763 systemd[1]: Reached target remote-fs.target - Remote File Systems.
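
The GPT complaint is simple arithmetic: a GPT disk keeps its backup header in the last LBA, so the 19775488-sector /dev/vda should have it at 19775487, while the primary header still records 9289727, the last sector of the smaller disk the image was originally built for. The disk-uuid.service step further down then rewrites the headers on first boot:

    disk_blocks = 19775488          # from the virtio_blk line above
    expected_alt = disk_blocks - 1  # backup GPT header lives in the last LBA
    claimed_alt = 9289727           # what the primary header recorded
    assert expected_alt == 19775487 and claimed_alt != expected_alt
    # (9289727 + 1) * 512 bytes is about 4.4 GiB: the image's original size
    # before the backing device was enlarged to 10.1 GB.
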
Feb 13 15:47:46.726359 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:47:46.735144 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:47:46.735266 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:47:46.737918 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:47:46.739395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:47:46.739558 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:47:46.741109 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:47:46.751646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:47:46.760200 kernel: libata version 3.00 loaded.
Feb 13 15:47:46.769344 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:47:46.774798 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473)
Feb 13 15:47:46.774853 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 15:47:46.792041 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 15:47:46.792058 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 15:47:46.792337 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 15:47:46.792489 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (469)
Feb 13 15:47:46.792500 kernel: scsi host0: ahci
Feb 13 15:47:46.792648 kernel: scsi host1: ahci
Feb 13 15:47:46.792793 kernel: scsi host2: ahci
Feb 13 15:47:46.792973 kernel: scsi host3: ahci
Feb 13 15:47:46.793123 kernel: scsi host4: ahci
Feb 13 15:47:46.793279 kernel: scsi host5: ahci
Feb 13 15:47:46.793428 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 15:47:46.793439 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 15:47:46.793450 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 15:47:46.793460 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 15:47:46.793474 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 15:47:46.793484 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 15:47:46.774829 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:47:46.792952 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:47:46.826262 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:47:46.828873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:47:46.833892 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:47:46.833973 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:47:46.852320 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:47:46.854359 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:47:46.865535 disk-uuid[569]: Primary Header is updated.
Feb 13 15:47:46.865535 disk-uuid[569]: Secondary Entries is updated.
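
The \x2d sequences in the device unit names are systemd's escaping for paths: the leading '/' is dropped, remaining '/' become '-', and literal '-' becomes \x2d. A simplified decoder (the real systemd algorithm handles more cases):

    import re

    def unit_to_path(unit: str) -> str:
        name = unit.rsplit(".", 1)[0]       # drop the .device suffix
        name = name.replace("-", "/")       # '-' separates path components
        name = re.sub(r"\\x([0-9a-f]{2})",  # '\xNN' encodes a literal byte
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unit_to_path(r"dev-disk-by\x2dlabel-ROOT.device"))
    # /dev/disk/by-label/ROOT  -- matching the pairing shown in the log
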
Feb 13 15:47:46.865535 disk-uuid[569]: Secondary Header is updated.
Feb 13 15:47:46.870179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:47:46.871081 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:47:46.875183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:47:47.100485 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 15:47:47.100571 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 15:47:47.100603 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 15:47:47.102188 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 15:47:47.102270 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 15:47:47.103547 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 15:47:47.105288 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 15:47:47.105340 kernel: ata3.00: applying bridge limits
Feb 13 15:47:47.106717 kernel: ata3.00: configured for UDMA/100
Feb 13 15:47:47.109251 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:47:47.156420 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 15:47:47.171037 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:47:47.171063 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:47:47.882190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:47:47.882375 disk-uuid[575]: The operation has completed successfully.
Feb 13 15:47:47.908117 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:47:47.908261 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:47:47.945302 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:47:47.948998 sh[594]: Success
Feb 13 15:47:47.962195 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 15:47:47.995198 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:47:48.005845 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:47:48.008585 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:47:48.023676 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:47:48.023715 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:47:48.023727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:47:48.025096 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:47:48.026897 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:47:48.031324 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:47:48.033996 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:47:48.043368 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:47:48.045257 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
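
verity-setup.service builds /dev/mapper/usr, a dm-verity device backed by the USR-A partition: the partition is hashed in fixed-size blocks into a Merkle tree whose root digest must equal the verity.usrhash= value on the kernel command line, so offline tampering with /usr is detected at read time. A toy two-leaf tree showing the idea (real dm-verity adds a salt and a specific on-disk layout):

    import hashlib

    def leaf(block: bytes) -> bytes:
        return hashlib.sha256(block).digest()

    l0 = leaf(b"block0".ljust(4096, b"\x00"))
    l1 = leaf(b"block1".ljust(4096, b"\x00"))
    root = hashlib.sha256(l0 + l1).hexdigest()  # compared against a trusted hash
    print(root)
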
Feb 13 15:47:48.057629 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:47:48.057652 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:47:48.057663 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:47:48.061188 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:47:48.071366 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:47:48.073704 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:47:48.125530 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:47:48.136336 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:47:48.178211 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:47:48.208487 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:47:48.242688 systemd-networkd[775]: lo: Link UP
Feb 13 15:47:48.242700 systemd-networkd[775]: lo: Gained carrier
Feb 13 15:47:48.244719 systemd-networkd[775]: Enumeration completed
Feb 13 15:47:48.245213 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:47:48.245217 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:47:48.251049 systemd-networkd[775]: eth0: Link UP
Feb 13 15:47:48.251058 systemd-networkd[775]: eth0: Gained carrier
Feb 13 15:47:48.251071 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:47:48.259373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:47:48.259622 systemd[1]: Reached target network.target - Network.
Feb 13 15:47:48.289114 ignition[754]: Ignition 2.20.0
Feb 13 15:47:48.289127 ignition[754]: Stage: fetch-offline
Feb 13 15:47:48.289183 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:47:48.289194 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:47:48.289303 ignition[754]: parsed url from cmdline: ""
Feb 13 15:47:48.289308 ignition[754]: no config URL provided
Feb 13 15:47:48.289313 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:47:48.289322 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:47:48.289365 ignition[754]: op(1): [started] loading QEMU firmware config module
Feb 13 15:47:48.289370 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:47:48.297296 ignition[754]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:47:48.303288 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:47:48.338645 ignition[754]: parsing config with SHA512: 753a4954f167b10c70a14160d004ef9460647235c1cf4c626e12dfee04904b699b994e7709521c442ea3eed22c6523202a1c4cdae989866304a8673912ecf91f
Feb 13 15:47:48.345086 unknown[754]: fetched base config from "system"
Feb 13 15:47:48.345100 unknown[754]: fetched user config from "qemu"
Feb 13 15:47:48.345495 ignition[754]: fetch-offline: fetch-offline passed
Feb 13 15:47:48.345582 ignition[754]: Ignition finished successfully
Feb 13 15:47:48.350703 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
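
Before applying a config, Ignition logs the SHA512 of the raw bytes it fetched (here via QEMU's fw_cfg interface); that is the long "parsing config with SHA512: 753a4954..." digest above. The same computation on a made-up config (the digest printed below is illustrative, not the one in the log):

    import hashlib, json

    raw = json.dumps({"ignition": {"version": "3.4.0"}}).encode()
    print(hashlib.sha512(raw).hexdigest())
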
Feb 13 15:47:48.350986 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:47:48.366407 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:47:48.384219 ignition[787]: Ignition 2.20.0
Feb 13 15:47:48.384231 ignition[787]: Stage: kargs
Feb 13 15:47:48.384401 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:47:48.384413 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:47:48.388114 ignition[787]: kargs: kargs passed
Feb 13 15:47:48.388177 ignition[787]: Ignition finished successfully
Feb 13 15:47:48.392858 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:47:48.407446 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:47:48.431463 ignition[796]: Ignition 2.20.0
Feb 13 15:47:48.431475 ignition[796]: Stage: disks
Feb 13 15:47:48.431643 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:47:48.431657 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:47:48.432486 ignition[796]: disks: disks passed
Feb 13 15:47:48.432533 ignition[796]: Ignition finished successfully
Feb 13 15:47:48.438245 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:47:48.439483 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:47:48.441420 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:47:48.442661 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:47:48.444675 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:47:48.445712 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:47:48.460373 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:47:48.476081 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:47:48.484291 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:47:48.497257 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:47:48.592182 kernel: EXT4-fs (vda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:47:48.592552 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:47:48.594007 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:47:48.606271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:47:48.608231 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:47:48.610205 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:47:48.610263 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:47:48.610290 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:47:48.619874 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:47:48.622131 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
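
The fsck summary reads as used/total: 14 of 553520 inodes and 52654 of 553472 blocks in use, i.e. the ROOT filesystem is still nearly empty at this point of the first boot:

    files_used, files_total = 14, 553520
    blocks_used, blocks_total = 52654, 553472
    print(f"inodes: {files_used / files_total:.4%}")   # ~0.0025%
    print(f"blocks: {blocks_used / blocks_total:.2%}") # ~9.51%
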
Feb 13 15:47:48.631187 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816)
Feb 13 15:47:48.634205 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:47:48.634242 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:47:48.634253 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:47:48.638295 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:47:48.640789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:47:48.672672 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:47:48.677275 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:47:48.681096 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:47:48.685041 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:47:48.776033 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:47:48.786259 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:47:48.788177 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:47:48.795186 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:47:48.825974 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:47:48.841663 ignition[929]: INFO : Ignition 2.20.0
Feb 13 15:47:48.841663 ignition[929]: INFO : Stage: mount
Feb 13 15:47:48.843500 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:47:48.843500 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:47:48.843500 ignition[929]: INFO : mount: mount passed
Feb 13 15:47:48.843500 ignition[929]: INFO : Ignition finished successfully
Feb 13 15:47:48.844937 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:47:48.854307 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:47:49.022214 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:47:49.035334 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:47:49.043787 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Feb 13 15:47:49.043836 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:47:49.044732 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:47:49.044761 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:47:49.048190 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:47:49.049205 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:47:49.091309 ignition[959]: INFO : Ignition 2.20.0 Feb 13 15:47:49.091309 ignition[959]: INFO : Stage: files Feb 13 15:47:49.093181 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:47:49.093181 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:47:49.093181 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:47:49.093181 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:47:49.093181 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:47:49.100501 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:47:49.100501 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:47:49.100501 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:47:49.100501 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:47:49.100501 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:47:49.096087 unknown[959]: wrote ssh authorized keys file for user: core Feb 13 15:47:49.171821 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:47:49.349444 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:47:49.349444 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:47:49.353589 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:47:49.492395 systemd-networkd[775]: eth0: Gained IPv6LL Feb 13 15:47:49.860913 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:47:50.136760 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:47:50.136760 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
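Note: the `GET ...: attempt #1` / `GET result: OK` pairs are Ignition's retrying fetch loop writing the payloads named in the config (helm, cilium). A simplified sketch of such a loop, assuming a plain HTTP fetch with capped exponential back-off; Ignition's real retry policy and verification steps are not modeled here:

```python
import time
import urllib.request

def fetch_with_retries(url: str, dest: str, attempts: int = 5) -> None:
    """Download url to dest, logging numbered attempts like Ignition does."""
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            urllib.request.urlretrieve(url, dest)
            print("GET result: OK")
            return
        except OSError:
            time.sleep(min(2 ** attempt, 30))  # capped exponential back-off
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")
```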
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:47:50.141460 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 15:47:50.579613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:47:51.121769 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:47:51.121769 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:47:51.126068 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:47:51.126068 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:47:51.126068 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:47:51.126068 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 15:47:51.126068 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:47:51.126068 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:47:51.126068 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 15:47:51.126068 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:47:51.160513 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:47:51.165897 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:47:51.167487 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:47:51.167487 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:47:51.167487 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:47:51.167487 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:47:51.167487 
ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:47:51.167487 ignition[959]: INFO : files: files passed Feb 13 15:47:51.167487 ignition[959]: INFO : Ignition finished successfully Feb 13 15:47:51.168947 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:47:51.179456 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:47:51.181667 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:47:51.183654 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:47:51.183778 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:47:51.192181 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:47:51.195090 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:47:51.195090 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:47:51.198569 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:47:51.201770 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:47:51.203466 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:47:51.211366 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:47:51.240406 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:47:51.240568 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:47:51.243700 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:47:51.246008 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:47:51.247322 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:47:51.248328 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:47:51.269359 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:47:51.287460 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:47:51.299513 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:47:51.301015 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:47:51.303546 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:47:51.305836 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:47:51.305974 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:47:51.308649 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:47:51.310417 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:47:51.312874 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:47:51.315302 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:47:51.317568 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:47:51.320030 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Feb 13 15:47:51.322588 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:47:51.325012 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:47:51.327395 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:47:51.329796 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:47:51.331789 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:47:51.331997 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:47:51.334505 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:47:51.336134 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:47:51.338437 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:47:51.338631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:47:51.340934 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:47:51.341093 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:47:51.343685 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:47:51.343846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:47:51.345909 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:47:51.347990 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:47:51.352324 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:47:51.356508 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:47:51.358209 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:47:51.360256 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:47:51.360407 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:47:51.362607 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:47:51.362712 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:47:51.364505 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:47:51.364637 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:47:51.366559 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:47:51.366679 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:47:51.385342 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:47:51.387670 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:47:51.387806 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:47:51.393966 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:47:51.396110 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:47:51.397391 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:47:51.399969 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:47:51.400084 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:47:51.406494 ignition[1014]: INFO : Ignition 2.20.0 Feb 13 15:47:51.406494 ignition[1014]: INFO : Stage: umount Feb 13 15:47:51.405801 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 13 15:47:51.411251 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:47:51.411251 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:47:51.405920 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:47:51.416215 ignition[1014]: INFO : umount: umount passed Feb 13 15:47:51.417110 ignition[1014]: INFO : Ignition finished successfully Feb 13 15:47:51.418644 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:47:51.418785 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:47:51.420941 systemd[1]: Stopped target network.target - Network. Feb 13 15:47:51.422632 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:47:51.422687 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:47:51.424559 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:47:51.424606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:47:51.426915 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:47:51.426961 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:47:51.428962 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:47:51.429009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:47:51.431125 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:47:51.433438 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:47:51.437055 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:47:51.437253 systemd-networkd[775]: eth0: DHCPv6 lease lost Feb 13 15:47:51.437768 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:47:51.437903 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:47:51.440886 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:47:51.441034 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:47:51.445119 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:47:51.445206 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:47:51.451255 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:47:51.452901 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:47:51.452973 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:47:51.455319 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:47:51.455369 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:47:51.457370 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:47:51.457416 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:47:51.459634 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:47:51.459684 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:47:51.462324 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:47:51.476317 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:47:51.476557 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:47:51.502609 systemd[1]: network-cleanup.service: Deactivated successfully. 
Feb 13 15:47:51.502727 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:47:51.505641 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:47:51.505707 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:47:51.507464 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:47:51.507506 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:47:51.509749 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:47:51.509802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:47:51.512209 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:47:51.512258 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:47:51.514450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:47:51.514496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:47:51.524407 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:47:51.525597 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:47:51.525665 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:47:51.528050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:47:51.528108 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:47:51.531888 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:47:51.532009 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:47:51.813823 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:47:51.813968 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:47:51.816493 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:47:51.817374 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:47:51.817425 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:47:51.832343 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:47:51.841500 systemd[1]: Switching root. Feb 13 15:47:51.875327 systemd-journald[193]: Journal stopped Feb 13 15:47:53.744417 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Feb 13 15:47:53.744480 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:47:53.744498 kernel: SELinux: policy capability open_perms=1 Feb 13 15:47:53.744510 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:47:53.744525 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:47:53.744536 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:47:53.744548 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:47:53.744559 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:47:53.744570 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:47:53.744587 kernel: audit: type=1403 audit(1739461672.792:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:47:53.744603 systemd[1]: Successfully loaded SELinux policy in 45.706ms. Feb 13 15:47:53.744622 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.246ms. 
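Note: at switch-root the initrd journal stops, PID 1 re-executes into the real root, and the SELinux policy load and relabel timings are printed. A small helper for pulling those `in 45.706ms`-style durations out of journal lines (regex and function are ours; it also accepts the `in 245 ms` spacing seen later in this log):

```python
import re

DURATION = re.compile(r"in (\d+(?:\.\d+)?) ?(ms|s)\b")

def duration_seconds(journal_line: str) -> float | None:
    """Extract a systemd-printed duration from a journal line, in seconds."""
    m = DURATION.search(journal_line)
    if m is None:
        return None
    value, unit = float(m.group(1)), m.group(2)
    return value / 1000.0 if unit == "ms" else value

line = "Successfully loaded SELinux policy in 45.706ms."
assert abs(duration_seconds(line) - 0.045706) < 1e-9
```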
Feb 13 15:47:53.744636 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:47:53.744650 systemd[1]: Detected virtualization kvm. Feb 13 15:47:53.744662 systemd[1]: Detected architecture x86-64. Feb 13 15:47:53.744674 systemd[1]: Detected first boot. Feb 13 15:47:53.744686 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:47:53.744698 zram_generator::config[1060]: No configuration found. Feb 13 15:47:53.744712 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:47:53.744724 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:47:53.744736 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:47:53.744751 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:47:53.744764 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:47:53.744776 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:47:53.744788 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:47:53.744800 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:47:53.744812 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:47:53.744825 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:47:53.744837 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:47:53.744851 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:47:53.744864 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:47:53.744876 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:47:53.744889 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:47:53.744906 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:47:53.744918 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:47:53.744931 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:47:53.744943 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:47:53.744955 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:47:53.744970 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:47:53.744982 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:47:53.744998 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:47:53.745010 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:47:53.745023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:47:53.745035 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:47:53.745047 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:47:53.745059 systemd[1]: Reached target swap.target - Swaps. 
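Note: "Initializing machine ID from VM UUID" means /etc/machine-id was derived from the hypervisor-provided DMI product UUID rather than generated at random. A simplified sketch of that derivation, assuming the usual sysfs path; systemd performs additional validation that is omitted here:

```python
import uuid

def machine_id_from_dmi(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    """Derive a machine-id-style string from the DMI product UUID."""
    with open(path) as f:
        vm_uuid = uuid.UUID(f.read().strip())
    return vm_uuid.hex  # 32 lowercase hex digits, machine-id format
```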
Feb 13 15:47:53.745077 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:47:53.745093 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:47:53.745108 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:47:53.745122 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:47:53.745134 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:47:53.745146 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:47:53.745170 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:47:53.745183 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:47:53.745196 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:47:53.745218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:47:53.745231 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:47:53.745243 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:47:53.745256 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:47:53.745269 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:47:53.745281 systemd[1]: Reached target machines.target - Containers. Feb 13 15:47:53.745293 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:47:53.745305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:47:53.745320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:47:53.745332 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:47:53.745344 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:47:53.745356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:47:53.745369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:47:53.745382 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:47:53.745394 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:47:53.745407 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:47:53.745419 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:47:53.745434 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:47:53.745446 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:47:53.745458 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:47:53.745470 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:47:53.745482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:47:53.745494 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:47:53.745507 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Feb 13 15:47:53.745519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:47:53.745531 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:47:53.745546 systemd[1]: Stopped verity-setup.service. Feb 13 15:47:53.745558 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:47:53.745571 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:47:53.745583 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:47:53.745595 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:47:53.745608 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:47:53.745620 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:47:53.745635 kernel: loop: module loaded Feb 13 15:47:53.745647 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:47:53.745659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:47:53.745671 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:47:53.745683 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:47:53.745695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:47:53.745709 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:47:53.745721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:47:53.745733 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:47:53.745746 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:47:53.745758 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:47:53.745770 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:47:53.745782 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:47:53.745794 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:47:53.745807 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:47:53.745824 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:47:53.745836 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:47:53.745848 kernel: fuse: init (API version 7.39) Feb 13 15:47:53.745877 systemd-journald[1123]: Collecting audit messages is disabled. Feb 13 15:47:53.745906 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:47:53.745918 systemd-journald[1123]: Journal started Feb 13 15:47:53.745941 systemd-journald[1123]: Runtime Journal (/run/log/journal/e95a935af155482cb0c53898310386bf) is 6.0M, max 48.3M, 42.3M free. Feb 13 15:47:53.354646 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:47:53.373041 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:47:53.373490 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:47:53.789377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:47:53.795826 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:47:53.799354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:47:53.801176 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:47:53.801218 kernel: ACPI: bus type drm_connector registered Feb 13 15:47:53.814190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:47:53.819222 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:47:53.820746 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:47:53.820979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:47:53.822899 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:47:53.823123 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:47:53.824883 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:47:53.825105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:47:53.826889 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:47:53.828512 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:47:53.830644 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:47:53.863497 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:47:53.894360 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:47:53.940267 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:47:53.941500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:47:53.944490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:47:53.947620 kernel: loop0: detected capacity change from 0 to 141000 Feb 13 15:47:53.952290 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:47:53.956528 systemd-journald[1123]: Time spent on flushing to /var/log/journal/e95a935af155482cb0c53898310386bf is 14.071ms for 957 entries. Feb 13 15:47:53.956528 systemd-journald[1123]: System Journal (/var/log/journal/e95a935af155482cb0c53898310386bf) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:47:54.403329 systemd-journald[1123]: Received client request to flush runtime journal. Feb 13 15:47:54.403375 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:47:54.403390 kernel: loop1: detected capacity change from 0 to 138184 Feb 13 15:47:54.403403 kernel: loop2: detected capacity change from 0 to 210664 Feb 13 15:47:53.954742 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:47:53.991534 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:47:54.001961 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:47:54.389119 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:47:54.392088 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Feb 13 15:47:54.399783 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:47:54.405906 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:47:54.409208 kernel: loop3: detected capacity change from 0 to 141000 Feb 13 15:47:54.489199 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 15:47:54.502048 kernel: loop5: detected capacity change from 0 to 210664 Feb 13 15:47:54.504540 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:47:54.514585 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:47:54.517990 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:47:54.518615 (sd-merge)[1189]: Merged extensions into '/usr'. Feb 13 15:47:54.526529 systemd[1]: Reloading requested from client PID 1137 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:47:54.526698 systemd[1]: Reloading... Feb 13 15:47:54.600680 zram_generator::config[1221]: No configuration found. Feb 13 15:47:54.722996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:47:54.768899 ldconfig[1130]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:47:54.772310 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:47:54.772600 systemd[1]: Reloading finished in 245 ms. Feb 13 15:47:54.822196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:47:54.824090 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:47:54.825968 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:47:54.838331 systemd[1]: Starting ensure-sysext.service... Feb 13 15:47:54.840752 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:47:54.844357 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:47:54.850022 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:47:54.854571 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:47:54.854589 systemd[1]: Reloading... Feb 13 15:47:54.898298 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 15:47:54.898315 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 15:47:54.910921 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:47:54.911287 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:47:54.913040 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:47:54.913373 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Feb 13 15:47:54.913455 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Feb 13 15:47:54.921150 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:47:54.921190 systemd-tmpfiles[1261]: Skipping /boot Feb 13 15:47:54.937414 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. 
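Note: the (sd-merge) entries just below show systemd-sysext discovering the three Flatcar extension images and overlaying them onto /usr, which is why a daemon reload follows. A sketch of the discovery half (directory list abbreviated; the merge itself is an overlayfs mount, indicated in the trailing comment):

```python
from pathlib import Path

# systemd-sysext searches several hierarchies; abbreviated here
SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def discover_extensions() -> list[Path]:
    """Collect *.raw sysext images from the search directories."""
    images: list[Path] = []
    for d in map(Path, SYSEXT_DIRS):
        if d.is_dir():
            images.extend(sorted(d.glob("*.raw")))
    return images

# Conceptually the merge is then a single overlay mount, e.g.:
#   mount -t overlay overlay -o lowerdir=/usr:<ext1>/usr:<ext2>/usr /usr
```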
Feb 13 15:47:54.937535 systemd-tmpfiles[1261]: Skipping /boot Feb 13 15:47:54.952189 zram_generator::config[1294]: No configuration found. Feb 13 15:47:55.072813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:47:55.129940 systemd[1]: Reloading finished in 274 ms. Feb 13 15:47:55.174047 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:47:55.175844 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:47:55.177717 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:47:55.197414 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:47:55.200738 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:47:55.203680 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:47:55.209350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:47:55.219508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:47:55.225334 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:47:55.231135 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:47:55.231336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:47:55.243457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:47:55.244305 augenrules[1357]: No rules Feb 13 15:47:55.247124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:47:55.249323 systemd-udevd[1341]: Using default interface naming scheme 'v255'. Feb 13 15:47:55.250408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:47:55.254232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:47:55.258976 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:47:55.260201 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:47:55.261534 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:47:55.261795 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:47:55.263740 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:47:55.265826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:47:55.266041 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:47:55.267882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:47:55.268054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:47:55.270273 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:47:55.270436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:47:55.273826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:47:55.285656 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:47:55.294641 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:47:55.304795 systemd[1]: Finished ensure-sysext.service. Feb 13 15:47:55.324054 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:47:55.331048 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:47:55.331286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:47:55.334804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1387) Feb 13 15:47:55.342185 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:47:55.343497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:47:55.347043 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:47:55.351346 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:47:55.359108 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:47:55.363674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:47:55.365202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:47:55.400126 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:47:55.404201 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:47:55.406610 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:47:55.410363 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:47:55.410406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:47:55.411276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:47:55.411518 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:47:55.413437 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:47:55.413608 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:47:55.417257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:47:55.417436 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:47:55.424129 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:47:55.425777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:47:55.439125 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:47:55.441796 augenrules[1396]: /sbin/augenrules: No change Feb 13 15:47:55.451979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:47:55.452113 augenrules[1434]: No rules Feb 13 15:47:55.453672 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:47:55.453895 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Feb 13 15:47:55.463206 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 15:47:55.469521 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:47:55.472447 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:47:55.474035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:47:55.474125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:47:55.487198 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:47:55.490386 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:47:55.493109 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:47:55.493396 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:47:55.490911 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:47:55.538852 systemd-resolved[1340]: Positive Trust Anchors: Feb 13 15:47:55.538874 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:47:55.538906 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:47:55.543381 systemd-resolved[1340]: Defaulting to hostname 'linux'. Feb 13 15:47:55.545196 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:47:55.547250 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:47:55.549502 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:47:55.607260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:47:55.610086 systemd-networkd[1410]: lo: Link UP Feb 13 15:47:55.610465 systemd-networkd[1410]: lo: Gained carrier Feb 13 15:47:55.612317 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:47:55.612535 systemd-networkd[1410]: Enumeration completed Feb 13 15:47:55.613005 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:47:55.613067 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:47:55.613859 systemd-networkd[1410]: eth0: Link UP Feb 13 15:47:55.613914 systemd-networkd[1410]: eth0: Gained carrier Feb 13 15:47:55.613964 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:47:55.617139 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:47:55.624603 systemd[1]: Reached target network.target - Network. Feb 13 15:47:55.672059 systemd[1]: Reached target time-set.target - System Time Set. 
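Note: the resolved "Negative trust anchors" list above enumerates domains exempted from DNSSEC validation, typically private or locally-served zones. A toy suffix-match check against an abbreviated subset of that list:

```python
# abbreviated from the journal's negative trust anchor list
NEGATIVE_ANCHORS = {
    "home.arpa", "168.192.in-addr.arpa", "local",
    "internal", "lan", "test",
}

def dnssec_exempt(name: str) -> bool:
    """True if any suffix of `name` matches a negative trust anchor."""
    labels = name.rstrip(".").split(".")
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
               for i in range(len(labels)))

assert dnssec_exempt("printer.lan")
assert not dnssec_exempt("example.org")
```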
Feb 13 15:47:55.683625 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:47:55.685454 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. Feb 13 15:47:55.686295 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:47:56.163281 systemd-resolved[1340]: Clock change detected. Flushing caches. Feb 13 15:47:56.163699 systemd-timesyncd[1415]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:47:56.163929 systemd-timesyncd[1415]: Initial clock synchronization to Thu 2025-02-13 15:47:56.163189 UTC. Feb 13 15:47:56.200234 kernel: kvm_amd: TSC scaling supported Feb 13 15:47:56.200312 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:47:56.200327 kernel: kvm_amd: Nested Paging enabled Feb 13 15:47:56.200812 kernel: kvm_amd: LBR virtualization supported Feb 13 15:47:56.202170 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:47:56.202260 kernel: kvm_amd: Virtual GIF supported Feb 13 15:47:56.223596 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:47:56.258962 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:47:56.281799 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:47:56.283586 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:47:56.290189 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:47:56.320011 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:47:56.321671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:47:56.322936 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:47:56.324252 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:47:56.325818 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:47:56.327395 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:47:56.328718 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:47:56.330168 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:47:56.331577 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:47:56.331607 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:47:56.332638 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:47:56.334597 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:47:56.337718 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:47:56.349187 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:47:56.351946 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:47:56.353722 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:47:56.355010 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:47:56.356086 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:47:56.357205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
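Note: once eth0 acquires 10.0.0.60/16 from 10.0.0.1, timesyncd contacts the same host for NTP and steps the clock, which is why resolved logs "Clock change detected" and the journal timestamps jump by roughly half a second here. The address math can be sanity-checked with the stdlib:

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.60/16")
gateway = ipaddress.ip_address("10.0.0.1")

assert gateway in iface.network        # 10.0.0.0/16 contains the gateway
print(iface.network.num_addresses)     # 65536 addresses in the /16
```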
Feb 13 15:47:56.357235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:47:56.358385 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:47:56.360642 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:47:56.363643 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:47:56.365582 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:47:56.372890 jq[1465]: false Feb 13 15:47:56.382069 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:47:56.383182 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:47:56.384428 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:47:56.384850 dbus-daemon[1464]: [system] SELinux support is enabled Feb 13 15:47:56.389866 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:47:56.393230 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:47:56.397713 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:47:56.402583 extend-filesystems[1466]: Found loop3 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found loop4 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found loop5 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found sr0 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found vda Feb 13 15:47:56.402583 extend-filesystems[1466]: Found vda1 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found vda2 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found vda3 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found usr Feb 13 15:47:56.402583 extend-filesystems[1466]: Found vda4 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found vda6 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found vda7 Feb 13 15:47:56.402583 extend-filesystems[1466]: Found vda9 Feb 13 15:47:56.402583 extend-filesystems[1466]: Checking size of /dev/vda9 Feb 13 15:47:56.424111 extend-filesystems[1466]: Resized partition /dev/vda9 Feb 13 15:47:56.403876 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:47:56.406394 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:47:56.407054 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:47:56.410024 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:47:56.418660 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:47:56.420863 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:47:56.424950 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:47:56.428079 jq[1483]: true Feb 13 15:47:56.428374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:47:56.428622 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:47:56.428943 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:47:56.429132 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 15:47:56.431218 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:47:56.432140 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:47:56.432333 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:47:56.440616 update_engine[1479]: I20250213 15:47:56.439159 1479 main.cc:92] Flatcar Update Engine starting Feb 13 15:47:56.446604 jq[1489]: true Feb 13 15:47:56.447023 update_engine[1479]: I20250213 15:47:56.446962 1479 update_check_scheduler.cc:74] Next update check in 4m47s Feb 13 15:47:56.449961 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:47:56.449898 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:47:56.452216 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:47:56.452264 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:47:56.455382 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:47:56.455414 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:47:56.460699 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:47:56.463096 tar[1488]: linux-amd64/helm Feb 13 15:47:56.464824 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:47:56.474761 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1369) Feb 13 15:47:56.489025 systemd-logind[1478]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:47:56.489056 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:47:56.489955 systemd-logind[1478]: New seat seat0. Feb 13 15:47:56.519944 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:47:56.492082 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:47:56.522523 extend-filesystems[1484]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:47:56.522523 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:47:56.522523 extend-filesystems[1484]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:47:56.528246 extend-filesystems[1466]: Resized filesystem in /dev/vda9 Feb 13 15:47:56.523858 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:47:56.524274 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:47:56.548151 bash[1518]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:47:56.550609 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:47:56.552784 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:47:56.602659 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:47:56.688354 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:47:56.748356 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
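Note: the resize entries above say the root filesystem grew on-line from 553472 to 1864699 blocks of 4 KiB. Quick arithmetic on those figures:

```python
BLOCK = 4096                              # "(4k) blocks" per the journal
old_blocks, new_blocks = 553472, 1864699

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after:  7.11 GiB
```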
Feb 13 15:47:56.756813 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:47:56.766376 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:47:56.766701 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:47:56.769954 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:47:56.788078 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:47:56.799905 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:47:56.800135 containerd[1496]: time="2025-02-13T15:47:56.800055793Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:47:56.802823 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:47:56.804174 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:47:56.823692 containerd[1496]: time="2025-02-13T15:47:56.823645572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:47:56.825666 containerd[1496]: time="2025-02-13T15:47:56.825633490Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:47:56.825666 containerd[1496]: time="2025-02-13T15:47:56.825658277Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:47:56.825740 containerd[1496]: time="2025-02-13T15:47:56.825673325Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:47:56.825867 containerd[1496]: time="2025-02-13T15:47:56.825842202Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:47:56.825867 containerd[1496]: time="2025-02-13T15:47:56.825863722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826091 containerd[1496]: time="2025-02-13T15:47:56.825932932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826091 containerd[1496]: time="2025-02-13T15:47:56.825952529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826165 containerd[1496]: time="2025-02-13T15:47:56.826143647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826165 containerd[1496]: time="2025-02-13T15:47:56.826161721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826209 containerd[1496]: time="2025-02-13T15:47:56.826175016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826209 containerd[1496]: time="2025-02-13T15:47:56.826184574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826325 containerd[1496]: time="2025-02-13T15:47:56.826282818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826561 containerd[1496]: time="2025-02-13T15:47:56.826516306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826712 containerd[1496]: time="2025-02-13T15:47:56.826660967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:47:56.826712 containerd[1496]: time="2025-02-13T15:47:56.826678129Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:47:56.826836 containerd[1496]: time="2025-02-13T15:47:56.826818372Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:47:56.826921 containerd[1496]: time="2025-02-13T15:47:56.826893824Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:47:56.833134 containerd[1496]: time="2025-02-13T15:47:56.833100594Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:47:56.833172 containerd[1496]: time="2025-02-13T15:47:56.833149075Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:47:56.833172 containerd[1496]: time="2025-02-13T15:47:56.833164864Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:47:56.833211 containerd[1496]: time="2025-02-13T15:47:56.833180053Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:47:56.833211 containerd[1496]: time="2025-02-13T15:47:56.833193348Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:47:56.833345 containerd[1496]: time="2025-02-13T15:47:56.833325285Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:47:56.833597 containerd[1496]: time="2025-02-13T15:47:56.833574883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:47:56.833709 containerd[1496]: time="2025-02-13T15:47:56.833689508Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:47:56.833737 containerd[1496]: time="2025-02-13T15:47:56.833716900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:47:56.833737 containerd[1496]: time="2025-02-13T15:47:56.833732218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:47:56.833783 containerd[1496]: time="2025-02-13T15:47:56.833746555Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:47:56.833783 containerd[1496]: time="2025-02-13T15:47:56.833760041Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 15:47:56.833783 containerd[1496]: time="2025-02-13T15:47:56.833771322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:47:56.833838 containerd[1496]: time="2025-02-13T15:47:56.833784206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:47:56.833838 containerd[1496]: time="2025-02-13T15:47:56.833798052Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:47:56.833838 containerd[1496]: time="2025-02-13T15:47:56.833810806Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:47:56.833838 containerd[1496]: time="2025-02-13T15:47:56.833822989Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:47:56.833838 containerd[1496]: time="2025-02-13T15:47:56.833835702Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:47:56.833928 containerd[1496]: time="2025-02-13T15:47:56.833855770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.833928 containerd[1496]: time="2025-02-13T15:47:56.833869426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.833928 containerd[1496]: time="2025-02-13T15:47:56.833881198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.833928 containerd[1496]: time="2025-02-13T15:47:56.833893661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.833928 containerd[1496]: time="2025-02-13T15:47:56.833906024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.833928 containerd[1496]: time="2025-02-13T15:47:56.833918267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.833928 containerd[1496]: time="2025-02-13T15:47:56.833928917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.833940920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.833953964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.833967580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.833979272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.833989952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.834001864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.834018034Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.834036549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834059 containerd[1496]: time="2025-02-13T15:47:56.834062087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834074460Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834114806Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834129774Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834140604Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834151946Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834161143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834175550Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834185088Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:47:56.834219 containerd[1496]: time="2025-02-13T15:47:56.834195417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:47:56.834573 containerd[1496]: time="2025-02-13T15:47:56.834473569Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:47:56.834573 containerd[1496]: time="2025-02-13T15:47:56.834574027Z" level=info msg="Connect containerd service" Feb 13 15:47:56.834725 containerd[1496]: time="2025-02-13T15:47:56.834612139Z" level=info msg="using legacy CRI server" Feb 13 15:47:56.834725 containerd[1496]: time="2025-02-13T15:47:56.834619031Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:47:56.834763 containerd[1496]: time="2025-02-13T15:47:56.834753965Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:47:56.835367 containerd[1496]: time="2025-02-13T15:47:56.835326819Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:47:56.835646 
containerd[1496]: time="2025-02-13T15:47:56.835617344Z" level=info msg="Start subscribing containerd event" Feb 13 15:47:56.835698 containerd[1496]: time="2025-02-13T15:47:56.835654624Z" level=info msg="Start recovering state" Feb 13 15:47:56.835720 containerd[1496]: time="2025-02-13T15:47:56.835709376Z" level=info msg="Start event monitor" Feb 13 15:47:56.835746 containerd[1496]: time="2025-02-13T15:47:56.835720046Z" level=info msg="Start snapshots syncer" Feb 13 15:47:56.835746 containerd[1496]: time="2025-02-13T15:47:56.835729294Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:47:56.835746 containerd[1496]: time="2025-02-13T15:47:56.835738040Z" level=info msg="Start streaming server" Feb 13 15:47:56.837793 containerd[1496]: time="2025-02-13T15:47:56.837759411Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:47:56.838047 containerd[1496]: time="2025-02-13T15:47:56.838018547Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:47:56.838194 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:47:56.840608 containerd[1496]: time="2025-02-13T15:47:56.840580531Z" level=info msg="containerd successfully booted in 0.042191s" Feb 13 15:47:56.945336 tar[1488]: linux-amd64/LICENSE Feb 13 15:47:56.945384 tar[1488]: linux-amd64/README.md Feb 13 15:47:56.960772 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:47:57.065046 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:47:57.067439 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:42302.service - OpenSSH per-connection server daemon (10.0.0.1:42302). Feb 13 15:47:57.116159 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 42302 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:47:57.118031 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:47:57.125726 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:47:57.139772 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:47:57.142786 systemd-logind[1478]: New session 1 of user core. Feb 13 15:47:57.150614 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:47:57.154825 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:47:57.163172 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:47:57.262431 systemd[1560]: Queued start job for default target default.target. Feb 13 15:47:57.274934 systemd[1560]: Created slice app.slice - User Application Slice. Feb 13 15:47:57.274961 systemd[1560]: Reached target paths.target - Paths. Feb 13 15:47:57.274974 systemd[1560]: Reached target timers.target - Timers. Feb 13 15:47:57.276466 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:47:57.288108 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:47:57.288255 systemd[1560]: Reached target sockets.target - Sockets. Feb 13 15:47:57.288279 systemd[1560]: Reached target basic.target - Basic System. Feb 13 15:47:57.288323 systemd[1560]: Reached target default.target - Main User Target. Feb 13 15:47:57.288365 systemd[1560]: Startup finished in 118ms. Feb 13 15:47:57.288636 systemd[1]: Started user@500.service - User Manager for UID 500. 
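containerd booted in roughly 42 ms, and the only error in its startup above is the expected one: no CNI config exists in /etc/cni/net.d yet, so pod networking stays unconfigured until a network add-on installs a conflist. The SystemdCgroup:true runc option visible in the CRI config dump corresponds to a small TOML fragment; a sketch for inspecting both:

    # Dump the merged configuration containerd is actually running with
    containerd config dump | grep -A2 'runc.options'

    # The fragment behind SystemdCgroup:true in the log, in
    # /etc/containerd/config.toml (containerd 1.7.x CRI schema):
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #     SystemdCgroup = true

    # The CNI error clears once a network add-on drops a file here
    ls /etc/cni/net.d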
Feb 13 15:47:57.291113 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:47:57.355952 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:41614.service - OpenSSH per-connection server daemon (10.0.0.1:41614). Feb 13 15:47:57.395632 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 41614 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:47:57.396988 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:47:57.400712 systemd-logind[1478]: New session 2 of user core. Feb 13 15:47:57.410698 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:47:57.464707 sshd[1573]: Connection closed by 10.0.0.1 port 41614 Feb 13 15:47:57.465042 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Feb 13 15:47:57.480481 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:41614.service: Deactivated successfully. Feb 13 15:47:57.482135 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:47:57.483462 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:47:57.484695 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:41616.service - OpenSSH per-connection server daemon (10.0.0.1:41616). Feb 13 15:47:57.486826 systemd-logind[1478]: Removed session 2. Feb 13 15:47:57.518666 systemd-networkd[1410]: eth0: Gained IPv6LL Feb 13 15:47:57.521704 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:47:57.522272 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 41616 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:47:57.523501 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:47:57.525869 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:47:57.531859 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:47:57.534528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:47:57.537036 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:47:57.552880 systemd-logind[1478]: New session 3 of user core. Feb 13 15:47:57.556312 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:47:57.559102 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:47:57.559371 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:47:57.562234 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:47:57.565199 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:47:57.616185 sshd[1597]: Connection closed by 10.0.0.1 port 41616 Feb 13 15:47:57.616524 sshd-session[1578]: pam_unix(sshd:session): session closed for user core Feb 13 15:47:57.619696 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:41616.service: Deactivated successfully. Feb 13 15:47:57.621369 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:47:57.621946 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:47:57.622884 systemd-logind[1478]: Removed session 3. Feb 13 15:47:58.694371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:47:58.696531 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:47:58.698275 systemd[1]: Startup finished in 694ms (kernel) + 7.096s (initrd) + 5.476s (userspace) = 13.267s. 
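The "Startup finished" line above carries the same kernel/initrd/userspace split that systemd-analyze prints, so the ~13.3 s boot can be profiled after the fact (a sketch):

    # Reprints the breakdown from the log line above
    systemd-analyze

    # Rank the units that dominated the ~5.5s of userspace time
    systemd-analyze blame | head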
Feb 13 15:47:58.699745 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:47:58.777557 agetty[1547]: failed to open credentials directory Feb 13 15:47:58.777572 agetty[1549]: failed to open credentials directory Feb 13 15:47:59.371887 kubelet[1606]: E0213 15:47:59.371823 1606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:47:59.375610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:47:59.375819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:47:59.376184 systemd[1]: kubelet.service: Consumed 1.679s CPU time. Feb 13 15:48:07.628376 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:44062.service - OpenSSH per-connection server daemon (10.0.0.1:44062). Feb 13 15:48:07.664587 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 44062 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:48:07.666265 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:48:07.670153 systemd-logind[1478]: New session 4 of user core. Feb 13 15:48:07.679679 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:48:07.734492 sshd[1622]: Connection closed by 10.0.0.1 port 44062 Feb 13 15:48:07.734885 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Feb 13 15:48:07.742014 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:44062.service: Deactivated successfully. Feb 13 15:48:07.743708 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:48:07.745035 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:48:07.752780 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:44072.service - OpenSSH per-connection server daemon (10.0.0.1:44072). Feb 13 15:48:07.753783 systemd-logind[1478]: Removed session 4. Feb 13 15:48:07.786170 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 44072 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:48:07.787681 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:48:07.791696 systemd-logind[1478]: New session 5 of user core. Feb 13 15:48:07.798666 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:48:07.847675 sshd[1629]: Connection closed by 10.0.0.1 port 44072 Feb 13 15:48:07.847934 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Feb 13 15:48:07.857131 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:44072.service: Deactivated successfully. Feb 13 15:48:07.858661 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:48:07.859944 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:48:07.861193 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:44080.service - OpenSSH per-connection server daemon (10.0.0.1:44080). Feb 13 15:48:07.862031 systemd-logind[1478]: Removed session 5. 
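This kubelet exit is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is only written when the node is initialized or joined, so the unit will crash-loop (consuming ~1.7 s of CPU per attempt here) until that happens. A sketch of the check and of the step that resolves it; the kubeadm commands are an assumption about how this node is meant to be bootstrapped:

    # The file the kubelet cannot read yet
    ls -l /var/lib/kubelet/config.yaml    # -> No such file or directory

    # kubeadm writes this file during bootstrap, after which the
    # restart loop seen below succeeds on its own
    kubeadm init ...     # control-plane node
    # kubeadm join ...   # worker node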
Feb 13 15:48:07.900441 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 44080 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:48:07.902060 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:48:07.905968 systemd-logind[1478]: New session 6 of user core. Feb 13 15:48:07.915670 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:48:07.971054 sshd[1636]: Connection closed by 10.0.0.1 port 44080 Feb 13 15:48:07.971502 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Feb 13 15:48:07.981949 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:44080.service: Deactivated successfully. Feb 13 15:48:07.983520 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:48:07.984925 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:48:07.996853 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:44090.service - OpenSSH per-connection server daemon (10.0.0.1:44090). Feb 13 15:48:07.997874 systemd-logind[1478]: Removed session 6. Feb 13 15:48:08.029243 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 44090 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:48:08.030568 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:48:08.034054 systemd-logind[1478]: New session 7 of user core. Feb 13 15:48:08.043656 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:48:08.101066 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:48:08.101411 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:48:08.121482 sudo[1644]: pam_unix(sudo:session): session closed for user root Feb 13 15:48:08.123173 sshd[1643]: Connection closed by 10.0.0.1 port 44090 Feb 13 15:48:08.123623 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Feb 13 15:48:08.138153 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:44090.service: Deactivated successfully. Feb 13 15:48:08.139753 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:48:08.141130 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:48:08.142666 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:44100.service - OpenSSH per-connection server daemon (10.0.0.1:44100). Feb 13 15:48:08.143561 systemd-logind[1478]: Removed session 7. Feb 13 15:48:08.179152 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 44100 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:48:08.180680 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:48:08.184646 systemd-logind[1478]: New session 8 of user core. Feb 13 15:48:08.194646 systemd[1]: Started session-8.scope - Session 8 of User core. 
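The first privileged action of session 7 above was setenforce 1, switching SELinux to enforcing at runtime; that change does not survive a reboot unless the persistent config agrees (a sketch):

    # Runtime mode, as just set via sudo
    getenforce                             # -> Enforcing

    # Persistent mode is read from here at boot
    grep '^SELINUX=' /etc/selinux/config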
Feb 13 15:48:08.247984 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:48:08.248399 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:48:08.252278 sudo[1653]: pam_unix(sudo:session): session closed for user root Feb 13 15:48:08.258703 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:48:08.259103 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:48:08.278803 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:48:08.306916 augenrules[1675]: No rules Feb 13 15:48:08.308488 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:48:08.308773 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:48:08.309985 sudo[1652]: pam_unix(sudo:session): session closed for user root Feb 13 15:48:08.311395 sshd[1651]: Connection closed by 10.0.0.1 port 44100 Feb 13 15:48:08.311706 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Feb 13 15:48:08.324168 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:44100.service: Deactivated successfully. Feb 13 15:48:08.326066 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:48:08.327567 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:48:08.340927 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:44116.service - OpenSSH per-connection server daemon (10.0.0.1:44116). Feb 13 15:48:08.341942 systemd-logind[1478]: Removed session 8. Feb 13 15:48:08.372516 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 44116 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:48:08.373970 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:48:08.377822 systemd-logind[1478]: New session 9 of user core. Feb 13 15:48:08.390912 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:48:08.444972 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:48:08.445301 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:48:08.759908 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:48:08.760043 (dockerd)[1706]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:48:09.626133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:48:09.633426 dockerd[1706]: time="2025-02-13T15:48:09.633331205Z" level=info msg="Starting up" Feb 13 15:48:09.634746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:48:10.074497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:48:10.080621 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:48:10.145207 dockerd[1706]: time="2025-02-13T15:48:10.145148145Z" level=info msg="Loading containers: start." 
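The audit-rules restart above ends with augenrules reporting "No rules", which follows directly from the preceding sudo rm of both rules files under /etc/audit/rules.d. A sketch of verifying what the kernel audit subsystem is left with:

    # augenrules compiles /etc/audit/rules.d/*.rules; the directory
    # was just emptied of its two rule files
    ls /etc/audit/rules.d/

    # Ask the kernel for the currently loaded rule set
    auditctl -l                            # -> No rules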
Feb 13 15:48:10.155045 kubelet[1740]: E0213 15:48:10.154932 1740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:48:10.162057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:48:10.162261 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:48:10.340583 kernel: Initializing XFRM netlink socket Feb 13 15:48:10.419308 systemd-networkd[1410]: docker0: Link UP Feb 13 15:48:10.471508 dockerd[1706]: time="2025-02-13T15:48:10.471462290Z" level=info msg="Loading containers: done." Feb 13 15:48:10.494381 dockerd[1706]: time="2025-02-13T15:48:10.494333071Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:48:10.494512 dockerd[1706]: time="2025-02-13T15:48:10.494458226Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:48:10.494644 dockerd[1706]: time="2025-02-13T15:48:10.494613407Z" level=info msg="Daemon has completed initialization" Feb 13 15:48:10.534198 dockerd[1706]: time="2025-02-13T15:48:10.534101982Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:48:10.534360 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:48:11.101903 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3245861637-merged.mount: Deactivated successfully. Feb 13 15:48:11.702514 containerd[1496]: time="2025-02-13T15:48:11.702462555Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:48:12.814958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682179525.mount: Deactivated successfully. 
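dockerd finished initializing and is serving its API on /run/docker.sock; the overlay2 warning above is informational (with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, dockerd falls back from native overlay diffs for image builds). A quick health check against the daemon described in the log (a sketch):

    # Storage driver and daemon version, matching the log line
    docker info --format '{{.Driver}} {{.ServerVersion}}'   # -> overlay2 27.3.1

    # The unix socket the API is listening on
    ss -xl | grep docker.sock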
Feb 13 15:48:14.457688 containerd[1496]: time="2025-02-13T15:48:14.457610361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:14.460490 containerd[1496]: time="2025-02-13T15:48:14.460430850Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 15:48:14.465924 containerd[1496]: time="2025-02-13T15:48:14.465873187Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:14.469249 containerd[1496]: time="2025-02-13T15:48:14.469200847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:14.470451 containerd[1496]: time="2025-02-13T15:48:14.470415846Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 2.767900983s" Feb 13 15:48:14.470504 containerd[1496]: time="2025-02-13T15:48:14.470453987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 15:48:14.502756 containerd[1496]: time="2025-02-13T15:48:14.502680513Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:48:17.109667 containerd[1496]: time="2025-02-13T15:48:17.109610017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:17.110488 containerd[1496]: time="2025-02-13T15:48:17.110452807Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 15:48:17.111751 containerd[1496]: time="2025-02-13T15:48:17.111664359Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:17.114701 containerd[1496]: time="2025-02-13T15:48:17.114646772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:17.115789 containerd[1496]: time="2025-02-13T15:48:17.115744932Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.613013072s" Feb 13 15:48:17.115789 containerd[1496]: time="2025-02-13T15:48:17.115786159Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
15:48:17.143859 containerd[1496]: time="2025-02-13T15:48:17.143814362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:48:18.415123 containerd[1496]: time="2025-02-13T15:48:18.415052738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:18.415829 containerd[1496]: time="2025-02-13T15:48:18.415777918Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 15:48:18.416992 containerd[1496]: time="2025-02-13T15:48:18.416946209Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:18.419751 containerd[1496]: time="2025-02-13T15:48:18.419712446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:18.421121 containerd[1496]: time="2025-02-13T15:48:18.421054724Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.277200807s" Feb 13 15:48:18.421121 containerd[1496]: time="2025-02-13T15:48:18.421109547Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 15:48:18.449179 containerd[1496]: time="2025-02-13T15:48:18.449137759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:48:19.899671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090723140.mount: Deactivated successfully. Feb 13 15:48:20.262524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:48:20.348801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:48:20.498270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:48:20.516860 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:48:21.119808 kubelet[2028]: E0213 15:48:21.119730 2028 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:48:21.123985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:48:21.124183 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
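Same failure as before: config.yaml still does not exist, so systemd keeps scheduling restart jobs between the image pulls (the counter is at 2 here and will reach 3 below). The restart policy and the live counter can be read back from the unit (a sketch):

    # The unit's restart policy (Flatcar's kubelet unit typically
    # sets Restart=always with a short delay)
    systemctl cat kubelet.service | grep -E '^Restart'

    # Live counter matching the "restart counter is at N" lines
    systemctl show kubelet.service -p NRestarts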
Feb 13 15:48:21.380709 containerd[1496]: time="2025-02-13T15:48:21.380524987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:21.381606 containerd[1496]: time="2025-02-13T15:48:21.381568374Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 15:48:21.382880 containerd[1496]: time="2025-02-13T15:48:21.382826723Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:21.385057 containerd[1496]: time="2025-02-13T15:48:21.385008204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:21.385749 containerd[1496]: time="2025-02-13T15:48:21.385713798Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.93653408s" Feb 13 15:48:21.385749 containerd[1496]: time="2025-02-13T15:48:21.385744365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:48:21.408908 containerd[1496]: time="2025-02-13T15:48:21.408858362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:48:22.376673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360092138.mount: Deactivated successfully. 
Feb 13 15:48:25.242538 containerd[1496]: time="2025-02-13T15:48:25.242486994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:25.243417 containerd[1496]: time="2025-02-13T15:48:25.243378506Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:48:25.244876 containerd[1496]: time="2025-02-13T15:48:25.244830749Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:25.248910 containerd[1496]: time="2025-02-13T15:48:25.248874643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:25.249852 containerd[1496]: time="2025-02-13T15:48:25.249819716Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.840911711s" Feb 13 15:48:25.249852 containerd[1496]: time="2025-02-13T15:48:25.249849552Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:48:25.278178 containerd[1496]: time="2025-02-13T15:48:25.278141750Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:48:27.023252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224995677.mount: Deactivated successfully. 
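The pulls logged so far (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd below) are exactly the control-plane image set for a v1.30 cluster. They can be enumerated and pre-pulled through the same CRI endpoint (a sketch; assumes kubeadm and crictl are installed):

    # List the images kubeadm needs for this version
    kubeadm config images list --kubernetes-version v1.30.10

    # Pull one through the CRI socket the kubelet will use
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/pause:3.9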
Feb 13 15:48:27.079975 containerd[1496]: time="2025-02-13T15:48:27.079912793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:27.087758 containerd[1496]: time="2025-02-13T15:48:27.087691281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:48:27.096930 containerd[1496]: time="2025-02-13T15:48:27.096865105Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:27.104737 containerd[1496]: time="2025-02-13T15:48:27.104673288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:27.106346 containerd[1496]: time="2025-02-13T15:48:27.106267097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.828084471s" Feb 13 15:48:27.106400 containerd[1496]: time="2025-02-13T15:48:27.106346807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:48:27.136215 containerd[1496]: time="2025-02-13T15:48:27.136155438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:48:27.721672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702865094.mount: Deactivated successfully. Feb 13 15:48:31.220168 containerd[1496]: time="2025-02-13T15:48:31.220106347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:31.220882 containerd[1496]: time="2025-02-13T15:48:31.220818201Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 15:48:31.221970 containerd[1496]: time="2025-02-13T15:48:31.221934029Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:31.224617 containerd[1496]: time="2025-02-13T15:48:31.224585337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:48:31.225599 containerd[1496]: time="2025-02-13T15:48:31.225569322Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.089373648s" Feb 13 15:48:31.225599 containerd[1496]: time="2025-02-13T15:48:31.225596284Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 15:48:31.262390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Feb 13 15:48:31.279711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:48:31.429824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:48:31.436049 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:48:31.477011 kubelet[2167]: E0213 15:48:31.476851 2167 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:48:31.481599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:48:31.481814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:48:34.197646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:48:34.214756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:48:34.231478 systemd[1]: Reloading requested from client PID 2249 ('systemctl') (unit session-9.scope)... Feb 13 15:48:34.231493 systemd[1]: Reloading... Feb 13 15:48:34.315420 zram_generator::config[2291]: No configuration found. Feb 13 15:48:34.507935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:48:34.585419 systemd[1]: Reloading finished in 353 ms. Feb 13 15:48:34.638659 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:48:34.638782 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:48:34.639120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:48:34.640846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:48:34.793875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:48:34.799158 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:48:34.840464 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:48:34.840464 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:48:34.840464 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
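After the daemon reload, kubelet PID 2337 starts with a real configuration. The deprecation warnings all say the same thing: set these options in the file passed via --config. Two of the three flagged options map to KubeletConfiguration fields (--pod-infra-container-image is simply being retired in favor of CRI). A sketch of the equivalent fragment, using v1beta1 field names; the volume plugin path is the one the log mentions below:

    # Flag -> config-file equivalents (printed here; merging them
    # into /var/lib/kubelet/config.yaml is left to the operator)
    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF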
Feb 13 15:48:34.840893 kubelet[2337]: I0213 15:48:34.840508 2337 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:48:35.218091 kubelet[2337]: I0213 15:48:35.217947 2337 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:48:35.218091 kubelet[2337]: I0213 15:48:35.217994 2337 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:48:35.218239 kubelet[2337]: I0213 15:48:35.218214 2337 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:48:35.232863 kubelet[2337]: I0213 15:48:35.232806 2337 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:48:35.233300 kubelet[2337]: E0213 15:48:35.233266 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.245704 kubelet[2337]: I0213 15:48:35.245672 2337 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:48:35.247811 kubelet[2337]: I0213 15:48:35.247751 2337 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:48:35.248032 kubelet[2337]: I0213 15:48:35.247801 2337 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:48:35.248607 kubelet[2337]: I0213 15:48:35.248577 2337 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:48:35.248607 kubelet[2337]: I0213 15:48:35.248596 2337 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:48:35.248811 kubelet[2337]: I0213 15:48:35.248783 2337 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
15:48:35.249556 kubelet[2337]: I0213 15:48:35.249528 2337 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:48:35.249678 kubelet[2337]: I0213 15:48:35.249611 2337 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:48:35.249678 kubelet[2337]: I0213 15:48:35.249652 2337 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:48:35.249737 kubelet[2337]: I0213 15:48:35.249692 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:48:35.251598 kubelet[2337]: W0213 15:48:35.251349 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.251598 kubelet[2337]: E0213 15:48:35.251421 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.252892 kubelet[2337]: W0213 15:48:35.252849 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.252961 kubelet[2337]: E0213 15:48:35.252904 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.255347 kubelet[2337]: I0213 15:48:35.255313 2337 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:48:35.256515 kubelet[2337]: I0213 15:48:35.256492 2337 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:48:35.256612 kubelet[2337]: W0213 15:48:35.256591 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
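Every reflector in this block fails identically: nothing is listening on 10.0.0.60:6443 yet, because the kubelet itself must first launch the API server as a static pod from the /etc/kubernetes/manifests path it just registered. Two quick checks (a sketch):

    # No listener on the apiserver port yet
    ss -tlnp | grep 6443 || echo 'no listener on 6443'

    # Static pod manifests the kubelet was configured to watch
    ls /etc/kubernetes/manifests/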
Feb 13 15:48:35.257360 kubelet[2337]: I0213 15:48:35.257332 2337 server.go:1264] "Started kubelet" Feb 13 15:48:35.258560 kubelet[2337]: I0213 15:48:35.257417 2337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:48:35.258560 kubelet[2337]: I0213 15:48:35.258410 2337 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:48:35.261558 kubelet[2337]: I0213 15:48:35.258817 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:48:35.261558 kubelet[2337]: I0213 15:48:35.258886 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:48:35.261558 kubelet[2337]: I0213 15:48:35.260462 2337 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:48:35.261558 kubelet[2337]: E0213 15:48:35.260802 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:48:35.261558 kubelet[2337]: I0213 15:48:35.260846 2337 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:48:35.261558 kubelet[2337]: I0213 15:48:35.260923 2337 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:48:35.261558 kubelet[2337]: I0213 15:48:35.260970 2337 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:48:35.261558 kubelet[2337]: W0213 15:48:35.261301 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.261558 kubelet[2337]: E0213 15:48:35.261333 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.261558 kubelet[2337]: E0213 15:48:35.261492 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms" Feb 13 15:48:35.264021 kubelet[2337]: I0213 15:48:35.263993 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:48:35.264927 kubelet[2337]: E0213 15:48:35.264905 2337 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:48:35.265645 kubelet[2337]: I0213 15:48:35.265255 2337 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:48:35.265645 kubelet[2337]: I0213 15:48:35.265270 2337 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:48:35.266945 kubelet[2337]: E0213 15:48:35.266800 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cf36c64d9650 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:48:35.25730056 +0000 UTC m=+0.454040743,LastTimestamp:2025-02-13 15:48:35.25730056 +0000 UTC m=+0.454040743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:48:35.276755 kubelet[2337]: I0213 15:48:35.276711 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:48:35.277902 kubelet[2337]: I0213 15:48:35.277884 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:48:35.277959 kubelet[2337]: I0213 15:48:35.277919 2337 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:48:35.277959 kubelet[2337]: I0213 15:48:35.277942 2337 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:48:35.278012 kubelet[2337]: E0213 15:48:35.277980 2337 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:48:35.285689 kubelet[2337]: W0213 15:48:35.283225 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.285689 kubelet[2337]: E0213 15:48:35.283277 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:35.302744 kubelet[2337]: I0213 15:48:35.302692 2337 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:48:35.302744 kubelet[2337]: I0213 15:48:35.302712 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:48:35.302744 kubelet[2337]: I0213 15:48:35.302730 2337 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:48:35.306772 kubelet[2337]: I0213 15:48:35.306758 2337 policy_none.go:49] "None policy: Start" Feb 13 15:48:35.307683 kubelet[2337]: I0213 15:48:35.307275 2337 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:48:35.307683 kubelet[2337]: I0213 15:48:35.307302 2337 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:48:35.316583 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
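The Container Manager config dumped earlier carries the hard eviction thresholds this kubelet will enforce: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and inode signals at 5%. A simplified sketch of how a quantity-based and a percentage-based signal compare against capacity; the real eviction manager evaluates cadvisor stats, and the inputs below are invented for illustration:

```go
// Toy evaluator for the two threshold kinds in the HardEvictionThresholds
// dump above: absolute quantities (100Mi) and percentages of capacity (10%).
package main

import "fmt"

type threshold struct {
	signal   string
	quantity int64   // absolute bytes; 0 if percentage-based
	percent  float64 // fraction of capacity; 0 if quantity-based
}

func breached(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percent: 0.10}
	fmt.Println(breached(memory, 64<<20, 8<<30))   // true: only 64Mi left
	fmt.Println(breached(nodefs, 30<<30, 100<<30)) // false: 30% of disk free
}
```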
Feb 13 15:48:35.331292 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:48:35.334755 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:48:35.344403 kubelet[2337]: I0213 15:48:35.344369 2337 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:48:35.344774 kubelet[2337]: I0213 15:48:35.344594 2337 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:48:35.344774 kubelet[2337]: I0213 15:48:35.344713 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:48:35.345574 kubelet[2337]: E0213 15:48:35.345553 2337 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:48:35.362363 kubelet[2337]: I0213 15:48:35.362342 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:48:35.362746 kubelet[2337]: E0213 15:48:35.362695 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Feb 13 15:48:35.378993 kubelet[2337]: I0213 15:48:35.378951 2337 topology_manager.go:215] "Topology Admit Handler" podUID="ff32d1fa611dab9efd204acbcf41f8ae" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:48:35.379903 kubelet[2337]: I0213 15:48:35.379882 2337 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:48:35.380624 kubelet[2337]: I0213 15:48:35.380606 2337 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:48:35.386605 systemd[1]: Created slice kubepods-burstable-podff32d1fa611dab9efd204acbcf41f8ae.slice - libcontainer container kubepods-burstable-podff32d1fa611dab9efd204acbcf41f8ae.slice. Feb 13 15:48:35.399577 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:48:35.410163 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
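The slice names systemd reports here encode the pod QoS class and UID. A small reconstruction of that naming scheme, inferred from the slices visible in this log (dashes in the UID become underscores, as the kube-proxy pod's slice later in the log shows); this mirrors the systemd cgroup driver's observable layout rather than quoting kubelet source:

```go
// Rebuild the per-pod slice names seen in the log, e.g.
// kubepods-burstable-podff32d1fa611dab9efd204acbcf41f8ae.slice and
// kubepods-besteffort-podb1984ecf_6ac5_441f_b033_b020930919d0.slice.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_") // systemd-safe UID
	if qosClass == "guaranteed" {
		// guaranteed pods sit directly under kubepods.slice (an inference,
		// not visible in this log)
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSlice("burstable", "ff32d1fa611dab9efd204acbcf41f8ae"))
	fmt.Println(podSlice("besteffort", "b1984ecf-6ac5-441f-b033-b020930919d0"))
}
```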
Feb 13 15:48:35.462173 kubelet[2337]: E0213 15:48:35.462131 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms" Feb 13 15:48:35.562530 kubelet[2337]: I0213 15:48:35.562486 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff32d1fa611dab9efd204acbcf41f8ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff32d1fa611dab9efd204acbcf41f8ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:48:35.562608 kubelet[2337]: I0213 15:48:35.562529 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff32d1fa611dab9efd204acbcf41f8ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff32d1fa611dab9efd204acbcf41f8ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:48:35.562608 kubelet[2337]: I0213 15:48:35.562575 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff32d1fa611dab9efd204acbcf41f8ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ff32d1fa611dab9efd204acbcf41f8ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:48:35.562608 kubelet[2337]: I0213 15:48:35.562600 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:35.562684 kubelet[2337]: I0213 15:48:35.562619 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:35.562684 kubelet[2337]: I0213 15:48:35.562641 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:35.562684 kubelet[2337]: I0213 15:48:35.562663 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:35.562753 kubelet[2337]: I0213 15:48:35.562688 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:48:35.562753 kubelet[2337]: I0213 
15:48:35.562710 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:35.564408 kubelet[2337]: I0213 15:48:35.564371 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:48:35.564680 kubelet[2337]: E0213 15:48:35.564648 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Feb 13 15:48:35.698303 kubelet[2337]: E0213 15:48:35.698258 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:35.698890 containerd[1496]: time="2025-02-13T15:48:35.698848947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ff32d1fa611dab9efd204acbcf41f8ae,Namespace:kube-system,Attempt:0,}" Feb 13 15:48:35.708005 kubelet[2337]: E0213 15:48:35.707967 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:35.708387 containerd[1496]: time="2025-02-13T15:48:35.708345440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 15:48:35.712602 kubelet[2337]: E0213 15:48:35.712579 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:35.712961 containerd[1496]: time="2025-02-13T15:48:35.712846110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 15:48:35.863597 kubelet[2337]: E0213 15:48:35.863433 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms" Feb 13 15:48:35.966250 kubelet[2337]: I0213 15:48:35.966205 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:48:35.966638 kubelet[2337]: E0213 15:48:35.966596 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Feb 13 15:48:36.086919 kubelet[2337]: W0213 15:48:36.086838 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:36.086919 kubelet[2337]: E0213 15:48:36.086924 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:36.664996 kubelet[2337]: E0213 15:48:36.664930 
2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="1.6s" Feb 13 15:48:36.706898 kubelet[2337]: W0213 15:48:36.706780 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:36.706898 kubelet[2337]: E0213 15:48:36.706893 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:36.768935 kubelet[2337]: I0213 15:48:36.768889 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:48:36.769404 kubelet[2337]: E0213 15:48:36.769355 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Feb 13 15:48:36.803162 kubelet[2337]: W0213 15:48:36.803078 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:36.803162 kubelet[2337]: E0213 15:48:36.803164 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:36.844797 kubelet[2337]: W0213 15:48:36.844708 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:36.844797 kubelet[2337]: E0213 15:48:36.844795 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:37.332104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383796652.mount: Deactivated successfully. 
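The lease controller's "will retry" interval doubles with each consecutive failure — 200ms, 400ms, 800ms, now 1.6s, and 3.2s further below. A sketch of that doubling; the 7s cap is an assumption, since the log never shows the interval saturating:

```go
// Reproduce the retry intervals visible in the "Failed to ensure lease
// exists" records: 200ms, 400ms, 800ms, 1.6s, 3.2s, ...
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed upper bound, not from the log
	for i := 0; i < 6; i++ {
		fmt.Printf("retry in %v\n", interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```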
Feb 13 15:48:37.363435 kubelet[2337]: E0213 15:48:37.363387 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.60:6443: connect: connection refused Feb 13 15:48:37.392238 containerd[1496]: time="2025-02-13T15:48:37.392176974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:48:37.409855 containerd[1496]: time="2025-02-13T15:48:37.409796517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:48:37.416213 containerd[1496]: time="2025-02-13T15:48:37.416055896Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:48:37.432808 containerd[1496]: time="2025-02-13T15:48:37.432767192Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:48:37.436229 containerd[1496]: time="2025-02-13T15:48:37.436195193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:48:37.448960 containerd[1496]: time="2025-02-13T15:48:37.448890187Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:48:37.453658 containerd[1496]: time="2025-02-13T15:48:37.453573779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:48:37.460499 containerd[1496]: time="2025-02-13T15:48:37.460446786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:48:37.461213 containerd[1496]: time="2025-02-13T15:48:37.461183076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.762243508s" Feb 13 15:48:37.468251 containerd[1496]: time="2025-02-13T15:48:37.468211098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.759771568s" Feb 13 15:48:37.472012 containerd[1496]: time="2025-02-13T15:48:37.471988435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.759086788s" Feb 13 15:48:37.699708 containerd[1496]: time="2025-02-13T15:48:37.699511255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:48:37.699708 containerd[1496]: time="2025-02-13T15:48:37.699578813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:48:37.699708 containerd[1496]: time="2025-02-13T15:48:37.699592389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:37.701028 containerd[1496]: time="2025-02-13T15:48:37.700839031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:37.701028 containerd[1496]: time="2025-02-13T15:48:37.700854389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:48:37.701028 containerd[1496]: time="2025-02-13T15:48:37.700904776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:48:37.701028 containerd[1496]: time="2025-02-13T15:48:37.700918281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:37.701028 containerd[1496]: time="2025-02-13T15:48:37.700987032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:37.704282 containerd[1496]: time="2025-02-13T15:48:37.704182482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:48:37.704370 containerd[1496]: time="2025-02-13T15:48:37.704328319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:48:37.704404 containerd[1496]: time="2025-02-13T15:48:37.704382553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:37.704574 containerd[1496]: time="2025-02-13T15:48:37.704505015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:37.733682 systemd[1]: Started cri-containerd-061e12afaad7fe05491e110aa26fadd251c7d7002f3cc82fa9eb9605aa9242ed.scope - libcontainer container 061e12afaad7fe05491e110aa26fadd251c7d7002f3cc82fa9eb9605aa9242ed. Feb 13 15:48:37.735521 systemd[1]: Started cri-containerd-4182e0826a6b4bd68ff9a85d0671e7f2d7b4d254f76991747ca926bca19c3bff.scope - libcontainer container 4182e0826a6b4bd68ff9a85d0671e7f2d7b4d254f76991747ca926bca19c3bff. Feb 13 15:48:37.858170 systemd[1]: Started cri-containerd-b07a65908b11520d0480aa8f90588daf3e04d6a09adade37111ca5b54deec9ee.scope - libcontainer container b07a65908b11520d0480aa8f90588daf3e04d6a09adade37111ca5b54deec9ee. 
Feb 13 15:48:37.934565 containerd[1496]: time="2025-02-13T15:48:37.934487304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"b07a65908b11520d0480aa8f90588daf3e04d6a09adade37111ca5b54deec9ee\"" Feb 13 15:48:37.935684 kubelet[2337]: E0213 15:48:37.935663 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:37.937888 containerd[1496]: time="2025-02-13T15:48:37.937730996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ff32d1fa611dab9efd204acbcf41f8ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"4182e0826a6b4bd68ff9a85d0671e7f2d7b4d254f76991747ca926bca19c3bff\"" Feb 13 15:48:37.939659 kubelet[2337]: E0213 15:48:37.939644 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:37.940355 containerd[1496]: time="2025-02-13T15:48:37.940323809Z" level=info msg="CreateContainer within sandbox \"b07a65908b11520d0480aa8f90588daf3e04d6a09adade37111ca5b54deec9ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:48:37.941334 containerd[1496]: time="2025-02-13T15:48:37.941314134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"061e12afaad7fe05491e110aa26fadd251c7d7002f3cc82fa9eb9605aa9242ed\"" Feb 13 15:48:37.941753 kubelet[2337]: E0213 15:48:37.941739 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:37.942571 containerd[1496]: time="2025-02-13T15:48:37.942509277Z" level=info msg="CreateContainer within sandbox \"4182e0826a6b4bd68ff9a85d0671e7f2d7b4d254f76991747ca926bca19c3bff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:48:37.943018 containerd[1496]: time="2025-02-13T15:48:37.942995021Z" level=info msg="CreateContainer within sandbox \"061e12afaad7fe05491e110aa26fadd251c7d7002f3cc82fa9eb9605aa9242ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:48:38.147441 containerd[1496]: time="2025-02-13T15:48:38.147382025Z" level=info msg="CreateContainer within sandbox \"b07a65908b11520d0480aa8f90588daf3e04d6a09adade37111ca5b54deec9ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dbfdf3211a9a2288256c2813f31b3a68047bec9cdaf61225a296882598ba1603\"" Feb 13 15:48:38.148205 containerd[1496]: time="2025-02-13T15:48:38.148163009Z" level=info msg="StartContainer for \"dbfdf3211a9a2288256c2813f31b3a68047bec9cdaf61225a296882598ba1603\"" Feb 13 15:48:38.174671 systemd[1]: Started cri-containerd-dbfdf3211a9a2288256c2813f31b3a68047bec9cdaf61225a296882598ba1603.scope - libcontainer container dbfdf3211a9a2288256c2813f31b3a68047bec9cdaf61225a296882598ba1603. 
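The containerd records here trace the standard CRI ordering: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer (just below) runs the result. The interface in this sketch is a hypothetical stand-in for the CRI RuntimeService, shown only to make that ordering explicit — it is not the real gRPC client:

```go
// Illustrative CRI call sequence for the three static control-plane pods.
package main

import "fmt"

// fakeRuntime is a stand-in for the CRI RuntimeService, not a real client.
type fakeRuntime struct{ n int }

func (r *fakeRuntime) RunPodSandbox(pod string) string {
	r.n++
	return fmt.Sprintf("sandbox-%d(%s)", r.n, pod)
}
func (r *fakeRuntime) CreateContainer(sandboxID, name string) string {
	return sandboxID + "/" + name
}
func (r *fakeRuntime) StartContainer(id string) { fmt.Println("started", id) }

func main() {
	rt := &fakeRuntime{}
	for _, pod := range []string{
		"kube-apiserver-localhost",
		"kube-controller-manager-localhost",
		"kube-scheduler-localhost",
	} {
		sb := rt.RunPodSandbox(pod)       // sandbox id logged by containerd
		id := rt.CreateContainer(sb, pod) // container id returned
		rt.StartContainer(id)             // "StartContainer ... returns successfully"
	}
}
```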
Feb 13 15:48:38.190016 containerd[1496]: time="2025-02-13T15:48:38.189968337Z" level=info msg="CreateContainer within sandbox \"4182e0826a6b4bd68ff9a85d0671e7f2d7b4d254f76991747ca926bca19c3bff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ab0618337c174fa4a0a69550b3e9cefce9c13337d9cf2a3e593a272ffcd1263b\"" Feb 13 15:48:38.191963 containerd[1496]: time="2025-02-13T15:48:38.190662898Z" level=info msg="StartContainer for \"ab0618337c174fa4a0a69550b3e9cefce9c13337d9cf2a3e593a272ffcd1263b\"" Feb 13 15:48:38.221062 systemd[1]: Started cri-containerd-ab0618337c174fa4a0a69550b3e9cefce9c13337d9cf2a3e593a272ffcd1263b.scope - libcontainer container ab0618337c174fa4a0a69550b3e9cefce9c13337d9cf2a3e593a272ffcd1263b. Feb 13 15:48:38.228644 containerd[1496]: time="2025-02-13T15:48:38.228592037Z" level=info msg="CreateContainer within sandbox \"061e12afaad7fe05491e110aa26fadd251c7d7002f3cc82fa9eb9605aa9242ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f93b7aa01091812bb82c4633ef31e674dcf04d6db93c51a8629210bc0a164193\"" Feb 13 15:48:38.229233 containerd[1496]: time="2025-02-13T15:48:38.228626413Z" level=info msg="StartContainer for \"dbfdf3211a9a2288256c2813f31b3a68047bec9cdaf61225a296882598ba1603\" returns successfully" Feb 13 15:48:38.230500 containerd[1496]: time="2025-02-13T15:48:38.229372130Z" level=info msg="StartContainer for \"f93b7aa01091812bb82c4633ef31e674dcf04d6db93c51a8629210bc0a164193\"" Feb 13 15:48:38.253682 systemd[1]: Started cri-containerd-f93b7aa01091812bb82c4633ef31e674dcf04d6db93c51a8629210bc0a164193.scope - libcontainer container f93b7aa01091812bb82c4633ef31e674dcf04d6db93c51a8629210bc0a164193. Feb 13 15:48:38.266276 kubelet[2337]: E0213 15:48:38.266225 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="3.2s" Feb 13 15:48:38.314092 containerd[1496]: time="2025-02-13T15:48:38.313971457Z" level=info msg="StartContainer for \"ab0618337c174fa4a0a69550b3e9cefce9c13337d9cf2a3e593a272ffcd1263b\" returns successfully" Feb 13 15:48:38.314092 containerd[1496]: time="2025-02-13T15:48:38.314063933Z" level=info msg="StartContainer for \"f93b7aa01091812bb82c4633ef31e674dcf04d6db93c51a8629210bc0a164193\" returns successfully" Feb 13 15:48:38.323565 kubelet[2337]: E0213 15:48:38.322366 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:38.323565 kubelet[2337]: E0213 15:48:38.323321 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:38.371250 kubelet[2337]: I0213 15:48:38.371214 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:48:39.325499 kubelet[2337]: E0213 15:48:39.325453 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:39.329564 kubelet[2337]: E0213 15:48:39.326185 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:39.639348 kubelet[2337]: I0213 
15:48:39.639202 2337 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:48:39.678691 kubelet[2337]: E0213 15:48:39.678645 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:48:40.253995 kubelet[2337]: I0213 15:48:40.253943 2337 apiserver.go:52] "Watching apiserver" Feb 13 15:48:40.261649 kubelet[2337]: I0213 15:48:40.261618 2337 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:48:41.434755 update_engine[1479]: I20250213 15:48:41.434656 1479 update_attempter.cc:509] Updating boot flags... Feb 13 15:48:41.472571 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2626) Feb 13 15:48:41.508715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2630) Feb 13 15:48:41.736471 systemd[1]: Reloading requested from client PID 2634 ('systemctl') (unit session-9.scope)... Feb 13 15:48:41.736488 systemd[1]: Reloading... Feb 13 15:48:41.810599 zram_generator::config[2673]: No configuration found. Feb 13 15:48:41.926101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:48:42.015404 systemd[1]: Reloading finished in 278 ms. Feb 13 15:48:42.067323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:48:42.081948 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:48:42.082211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:48:42.082260 systemd[1]: kubelet.service: Consumed 1.110s CPU time, 115.7M memory peak, 0B memory swap peak. Feb 13 15:48:42.091874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:48:42.237468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:48:42.241981 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:48:42.281731 kubelet[2718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:48:42.281731 kubelet[2718]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:48:42.281731 kubelet[2718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:48:42.281731 kubelet[2718]: I0213 15:48:42.281690 2718 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:48:42.286244 kubelet[2718]: I0213 15:48:42.286201 2718 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:48:42.286244 kubelet[2718]: I0213 15:48:42.286226 2718 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:48:42.286418 kubelet[2718]: I0213 15:48:42.286412 2718 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:48:42.287760 kubelet[2718]: I0213 15:48:42.287736 2718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:48:42.289445 kubelet[2718]: I0213 15:48:42.289413 2718 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:48:42.302625 kubelet[2718]: I0213 15:48:42.302596 2718 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:48:42.302913 kubelet[2718]: I0213 15:48:42.302867 2718 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:48:42.303128 kubelet[2718]: I0213 15:48:42.302905 2718 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:48:42.303219 kubelet[2718]: I0213 15:48:42.303136 2718 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:48:42.303219 kubelet[2718]: I0213 15:48:42.303150 2718 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:48:42.303219 kubelet[2718]: I0213 15:48:42.303205 2718 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:48:42.303339 kubelet[2718]: I0213 15:48:42.303321 2718 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:48:42.303368 kubelet[2718]: I0213 15:48:42.303340 2718 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Feb 13 15:48:42.303368 kubelet[2718]: I0213 15:48:42.303366 2718 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:48:42.303417 kubelet[2718]: I0213 15:48:42.303390 2718 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:48:42.306162 kubelet[2718]: I0213 15:48:42.304432 2718 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:48:42.306974 kubelet[2718]: I0213 15:48:42.306937 2718 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:48:42.309957 kubelet[2718]: I0213 15:48:42.308730 2718 server.go:1264] "Started kubelet" Feb 13 15:48:42.310150 kubelet[2718]: I0213 15:48:42.310097 2718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:48:42.311573 kubelet[2718]: I0213 15:48:42.310393 2718 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:48:42.311573 kubelet[2718]: I0213 15:48:42.310436 2718 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:48:42.311573 kubelet[2718]: I0213 15:48:42.310618 2718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:48:42.311573 kubelet[2718]: I0213 15:48:42.311323 2718 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:48:42.314527 kubelet[2718]: I0213 15:48:42.312031 2718 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:48:42.314527 kubelet[2718]: I0213 15:48:42.312107 2718 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:48:42.314527 kubelet[2718]: I0213 15:48:42.312220 2718 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:48:42.320071 kubelet[2718]: I0213 15:48:42.320045 2718 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:48:42.320224 kubelet[2718]: I0213 15:48:42.320214 2718 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:48:42.320363 kubelet[2718]: I0213 15:48:42.320345 2718 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:48:42.323341 kubelet[2718]: E0213 15:48:42.323317 2718 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:48:42.333381 kubelet[2718]: I0213 15:48:42.333257 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:48:42.336888 kubelet[2718]: I0213 15:48:42.334752 2718 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:48:42.336888 kubelet[2718]: I0213 15:48:42.334801 2718 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:48:42.336888 kubelet[2718]: I0213 15:48:42.334819 2718 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:48:42.336888 kubelet[2718]: E0213 15:48:42.334860 2718 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:48:42.363028 kubelet[2718]: I0213 15:48:42.362997 2718 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:48:42.363028 kubelet[2718]: I0213 15:48:42.363018 2718 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:48:42.363190 kubelet[2718]: I0213 15:48:42.363046 2718 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:48:42.363229 kubelet[2718]: I0213 15:48:42.363211 2718 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:48:42.363253 kubelet[2718]: I0213 15:48:42.363225 2718 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:48:42.363253 kubelet[2718]: I0213 15:48:42.363245 2718 policy_none.go:49] "None policy: Start" Feb 13 15:48:42.363826 kubelet[2718]: I0213 15:48:42.363810 2718 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:48:42.363875 kubelet[2718]: I0213 15:48:42.363832 2718 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:48:42.363979 kubelet[2718]: I0213 15:48:42.363967 2718 state_mem.go:75] "Updated machine memory state" Feb 13 15:48:42.368401 kubelet[2718]: I0213 15:48:42.368315 2718 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:48:42.368561 kubelet[2718]: I0213 15:48:42.368475 2718 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:48:42.368675 kubelet[2718]: I0213 15:48:42.368581 2718 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:48:42.416484 kubelet[2718]: I0213 15:48:42.416451 2718 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:48:42.422155 kubelet[2718]: I0213 15:48:42.422122 2718 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:48:42.422336 kubelet[2718]: I0213 15:48:42.422216 2718 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:48:42.435443 kubelet[2718]: I0213 15:48:42.435390 2718 topology_manager.go:215] "Topology Admit Handler" podUID="ff32d1fa611dab9efd204acbcf41f8ae" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:48:42.435614 kubelet[2718]: I0213 15:48:42.435533 2718 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:48:42.435614 kubelet[2718]: I0213 15:48:42.435579 2718 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:48:42.613039 kubelet[2718]: I0213 15:48:42.612887 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 
15:48:42.613039 kubelet[2718]: I0213 15:48:42.612933 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff32d1fa611dab9efd204acbcf41f8ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff32d1fa611dab9efd204acbcf41f8ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:48:42.613039 kubelet[2718]: I0213 15:48:42.612954 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff32d1fa611dab9efd204acbcf41f8ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ff32d1fa611dab9efd204acbcf41f8ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:48:42.613039 kubelet[2718]: I0213 15:48:42.612971 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:42.613039 kubelet[2718]: I0213 15:48:42.612987 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:42.613288 kubelet[2718]: I0213 15:48:42.613000 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:48:42.613288 kubelet[2718]: I0213 15:48:42.613013 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff32d1fa611dab9efd204acbcf41f8ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff32d1fa611dab9efd204acbcf41f8ae\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:48:42.613288 kubelet[2718]: I0213 15:48:42.613028 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:42.613288 kubelet[2718]: I0213 15:48:42.613041 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:48:42.734856 kubelet[2718]: E0213 15:48:42.734627 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:42.734856 kubelet[2718]: E0213 15:48:42.734835 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:42.819658 sudo[2753]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:48:42.820028 sudo[2753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:48:42.998834 kubelet[2718]: E0213 15:48:42.998699 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:43.304440 kubelet[2718]: I0213 15:48:43.304399 2718 apiserver.go:52] "Watching apiserver" Feb 13 15:48:43.309330 sudo[2753]: pam_unix(sudo:session): session closed for user root Feb 13 15:48:43.312745 kubelet[2718]: I0213 15:48:43.312714 2718 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:48:43.350007 kubelet[2718]: E0213 15:48:43.349954 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:43.350416 kubelet[2718]: E0213 15:48:43.350358 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:43.350867 kubelet[2718]: E0213 15:48:43.350842 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:43.358621 kubelet[2718]: I0213 15:48:43.358529 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.358506685 podStartE2EDuration="1.358506685s" podCreationTimestamp="2025-02-13 15:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:48:43.3577229 +0000 UTC m=+1.112055109" watchObservedRunningTime="2025-02-13 15:48:43.358506685 +0000 UTC m=+1.112838904" Feb 13 15:48:43.451950 kubelet[2718]: I0213 15:48:43.451865 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.451838303 podStartE2EDuration="1.451838303s" podCreationTimestamp="2025-02-13 15:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:48:43.45155654 +0000 UTC m=+1.205888759" watchObservedRunningTime="2025-02-13 15:48:43.451838303 +0000 UTC m=+1.206170522" Feb 13 15:48:43.514073 kubelet[2718]: I0213 15:48:43.513506 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.51348833 podStartE2EDuration="1.51348833s" podCreationTimestamp="2025-02-13 15:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:48:43.496976629 +0000 UTC m=+1.251308848" watchObservedRunningTime="2025-02-13 15:48:43.51348833 +0000 UTC m=+1.267820549" Feb 13 15:48:44.353057 kubelet[2718]: E0213 15:48:44.352988 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 13 15:48:44.353057 kubelet[2718]: E0213 15:48:44.353013 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:44.691775 sudo[1686]: pam_unix(sudo:session): session closed for user root Feb 13 15:48:44.693181 sshd[1685]: Connection closed by 10.0.0.1 port 44116 Feb 13 15:48:44.693791 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Feb 13 15:48:44.698403 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:44116.service: Deactivated successfully. Feb 13 15:48:44.700502 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:48:44.700743 systemd[1]: session-9.scope: Consumed 5.627s CPU time, 190.6M memory peak, 0B memory swap peak. Feb 13 15:48:44.701282 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:48:44.702243 systemd-logind[1478]: Removed session 9. Feb 13 15:48:46.160404 kubelet[2718]: E0213 15:48:46.160362 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:52.192849 kubelet[2718]: E0213 15:48:52.192806 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:52.363468 kubelet[2718]: E0213 15:48:52.363431 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:54.187098 kubelet[2718]: E0213 15:48:54.187064 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:56.156940 kubelet[2718]: I0213 15:48:56.156887 2718 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:48:56.157433 containerd[1496]: time="2025-02-13T15:48:56.157242864Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:48:56.157760 kubelet[2718]: I0213 15:48:56.157434 2718 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:48:56.171222 kubelet[2718]: E0213 15:48:56.171068 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:57.595941 kubelet[2718]: I0213 15:48:57.595559 2718 topology_manager.go:215] "Topology Admit Handler" podUID="b1984ecf-6ac5-441f-b033-b020930919d0" podNamespace="kube-system" podName="kube-proxy-db6lv" Feb 13 15:48:57.601881 systemd[1]: Created slice kubepods-besteffort-podb1984ecf_6ac5_441f_b033_b020930919d0.slice - libcontainer container kubepods-besteffort-podb1984ecf_6ac5_441f_b033_b020930919d0.slice. 
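The runtime config update above hands the CRI a pod CIDR of 192.168.0.0/24. A quick standard-library check of what that range admits, with the CIDR value taken from the log:

```go
// Inspect the pod CIDR the kubelet pushed to the runtime.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, err := net.ParseCIDR("192.168.0.0/24") // value from the log
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	fmt.Printf("network %s: %d addresses\n", cidr, 1<<(bits-ones)) // 256
	fmt.Println(cidr.Contains(net.ParseIP("192.168.0.42")))        // true
	fmt.Println(cidr.Contains(net.ParseIP("192.168.1.1")))         // false
}
```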
Feb 13 15:48:57.603740 kubelet[2718]: I0213 15:48:57.603712 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1984ecf-6ac5-441f-b033-b020930919d0-xtables-lock\") pod \"kube-proxy-db6lv\" (UID: \"b1984ecf-6ac5-441f-b033-b020930919d0\") " pod="kube-system/kube-proxy-db6lv" Feb 13 15:48:57.603793 kubelet[2718]: I0213 15:48:57.603739 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1984ecf-6ac5-441f-b033-b020930919d0-lib-modules\") pod \"kube-proxy-db6lv\" (UID: \"b1984ecf-6ac5-441f-b033-b020930919d0\") " pod="kube-system/kube-proxy-db6lv" Feb 13 15:48:57.603793 kubelet[2718]: I0213 15:48:57.603756 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1984ecf-6ac5-441f-b033-b020930919d0-kube-proxy\") pod \"kube-proxy-db6lv\" (UID: \"b1984ecf-6ac5-441f-b033-b020930919d0\") " pod="kube-system/kube-proxy-db6lv" Feb 13 15:48:57.603793 kubelet[2718]: I0213 15:48:57.603773 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bfcd\" (UniqueName: \"kubernetes.io/projected/b1984ecf-6ac5-441f-b033-b020930919d0-kube-api-access-7bfcd\") pod \"kube-proxy-db6lv\" (UID: \"b1984ecf-6ac5-441f-b033-b020930919d0\") " pod="kube-system/kube-proxy-db6lv" Feb 13 15:48:57.619266 kubelet[2718]: I0213 15:48:57.619231 2718 topology_manager.go:215] "Topology Admit Handler" podUID="bfa03493-0fcc-4823-a50a-f1211ddf3e96" podNamespace="kube-system" podName="cilium-7hl9x" Feb 13 15:48:57.627394 systemd[1]: Created slice kubepods-burstable-podbfa03493_0fcc_4823_a50a_f1211ddf3e96.slice - libcontainer container kubepods-burstable-podbfa03493_0fcc_4823_a50a_f1211ddf3e96.slice. 
Feb 13 15:48:57.704262 kubelet[2718]: I0213 15:48:57.704216 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-run\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704262 kubelet[2718]: I0213 15:48:57.704251 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-cgroup\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704262 kubelet[2718]: I0213 15:48:57.704269 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-lib-modules\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704453 kubelet[2718]: I0213 15:48:57.704284 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfa03493-0fcc-4823-a50a-f1211ddf3e96-clustermesh-secrets\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704453 kubelet[2718]: I0213 15:48:57.704302 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-etc-cni-netd\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704453 kubelet[2718]: I0213 15:48:57.704317 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-host-proc-sys-kernel\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704453 kubelet[2718]: I0213 15:48:57.704340 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9dq\" (UniqueName: \"kubernetes.io/projected/bfa03493-0fcc-4823-a50a-f1211ddf3e96-kube-api-access-dk9dq\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704453 kubelet[2718]: I0213 15:48:57.704413 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-config-path\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704594 kubelet[2718]: I0213 15:48:57.704502 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-xtables-lock\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704594 kubelet[2718]: I0213 15:48:57.704521 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-host-proc-sys-net\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704594 kubelet[2718]: I0213 15:48:57.704556 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-bpf-maps\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704594 kubelet[2718]: I0213 15:48:57.704580 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-hostproc\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704690 kubelet[2718]: I0213 15:48:57.704612 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cni-path\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:57.704690 kubelet[2718]: I0213 15:48:57.704646 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfa03493-0fcc-4823-a50a-f1211ddf3e96-hubble-tls\") pod \"cilium-7hl9x\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " pod="kube-system/cilium-7hl9x" Feb 13 15:48:58.419271 kubelet[2718]: I0213 15:48:58.418770 2718 topology_manager.go:215] "Topology Admit Handler" podUID="d4048bd4-6051-425b-a63c-aa9843d3cf79" podNamespace="kube-system" podName="cilium-operator-599987898-fh6vd" Feb 13 15:48:58.427629 systemd[1]: Created slice kubepods-besteffort-podd4048bd4_6051_425b_a63c_aa9843d3cf79.slice - libcontainer container kubepods-besteffort-podd4048bd4_6051_425b_a63c_aa9843d3cf79.slice. 
Feb 13 15:48:58.509838 kubelet[2718]: I0213 15:48:58.509782 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4048bd4-6051-425b-a63c-aa9843d3cf79-cilium-config-path\") pod \"cilium-operator-599987898-fh6vd\" (UID: \"d4048bd4-6051-425b-a63c-aa9843d3cf79\") " pod="kube-system/cilium-operator-599987898-fh6vd" Feb 13 15:48:58.509838 kubelet[2718]: I0213 15:48:58.509823 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwcv\" (UniqueName: \"kubernetes.io/projected/d4048bd4-6051-425b-a63c-aa9843d3cf79-kube-api-access-6hwcv\") pod \"cilium-operator-599987898-fh6vd\" (UID: \"d4048bd4-6051-425b-a63c-aa9843d3cf79\") " pod="kube-system/cilium-operator-599987898-fh6vd" Feb 13 15:48:58.513523 kubelet[2718]: E0213 15:48:58.513495 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:58.513994 containerd[1496]: time="2025-02-13T15:48:58.513951801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db6lv,Uid:b1984ecf-6ac5-441f-b033-b020930919d0,Namespace:kube-system,Attempt:0,}" Feb 13 15:48:58.530273 kubelet[2718]: E0213 15:48:58.530227 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:58.530733 containerd[1496]: time="2025-02-13T15:48:58.530683366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hl9x,Uid:bfa03493-0fcc-4823-a50a-f1211ddf3e96,Namespace:kube-system,Attempt:0,}" Feb 13 15:48:58.731766 kubelet[2718]: E0213 15:48:58.731645 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:58.732273 containerd[1496]: time="2025-02-13T15:48:58.732138683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fh6vd,Uid:d4048bd4-6051-425b-a63c-aa9843d3cf79,Namespace:kube-system,Attempt:0,}" Feb 13 15:48:59.559070 containerd[1496]: time="2025-02-13T15:48:59.557571684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:48:59.559070 containerd[1496]: time="2025-02-13T15:48:59.558737267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:48:59.559070 containerd[1496]: time="2025-02-13T15:48:59.558754710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:59.559070 containerd[1496]: time="2025-02-13T15:48:59.558850641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:59.584810 systemd[1]: Started cri-containerd-c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9.scope - libcontainer container c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9. Feb 13 15:48:59.589462 containerd[1496]: time="2025-02-13T15:48:59.588784706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:48:59.591381 containerd[1496]: time="2025-02-13T15:48:59.589443655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:48:59.591381 containerd[1496]: time="2025-02-13T15:48:59.589464835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:59.591381 containerd[1496]: time="2025-02-13T15:48:59.589604859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:59.616704 systemd[1]: Started cri-containerd-f5344b751e938821072f382160bdc7dc5038313acd70a767f7cf9693dfb16306.scope - libcontainer container f5344b751e938821072f382160bdc7dc5038313acd70a767f7cf9693dfb16306. Feb 13 15:48:59.617892 containerd[1496]: time="2025-02-13T15:48:59.617834997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hl9x,Uid:bfa03493-0fcc-4823-a50a-f1211ddf3e96,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\"" Feb 13 15:48:59.618487 kubelet[2718]: E0213 15:48:59.618466 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:59.620235 containerd[1496]: time="2025-02-13T15:48:59.620135186Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:48:59.629012 containerd[1496]: time="2025-02-13T15:48:59.628871526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:48:59.629012 containerd[1496]: time="2025-02-13T15:48:59.628951948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:48:59.629012 containerd[1496]: time="2025-02-13T15:48:59.628963530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:59.629809 containerd[1496]: time="2025-02-13T15:48:59.629711508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:48:59.643756 containerd[1496]: time="2025-02-13T15:48:59.643423102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db6lv,Uid:b1984ecf-6ac5-441f-b033-b020930919d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5344b751e938821072f382160bdc7dc5038313acd70a767f7cf9693dfb16306\"" Feb 13 15:48:59.644469 kubelet[2718]: E0213 15:48:59.644450 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:59.650019 systemd[1]: Started cri-containerd-9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263.scope - libcontainer container 9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263. 
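The PullImage target pins the cilium image by digest: the v1.12.5 tag travels along for readability, but containerd resolves the sha256 digest, which is why the later "Pulled image" entry reports an empty repo tag and only a repo digest. A naive stdlib split of such a reference; the real grammar is the distribution reference spec, so treat this as illustrative only:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef naively splits repo:tag@digest; production code should use
    // github.com/distribution/reference instead.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef(
            "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
        fmt.Println(repo)   // quay.io/cilium/cilium
        fmt.Println(tag)    // v1.12.5
        fmt.Println(digest) // sha256:06ce2b0a...
    }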
Feb 13 15:48:59.650419 containerd[1496]: time="2025-02-13T15:48:59.650130806Z" level=info msg="CreateContainer within sandbox \"f5344b751e938821072f382160bdc7dc5038313acd70a767f7cf9693dfb16306\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:48:59.679857 containerd[1496]: time="2025-02-13T15:48:59.679786927Z" level=info msg="CreateContainer within sandbox \"f5344b751e938821072f382160bdc7dc5038313acd70a767f7cf9693dfb16306\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7129d7d5c4c6e1df489295318cc829c224da57dd7fc8abf70460869214af7ca1\"" Feb 13 15:48:59.680670 containerd[1496]: time="2025-02-13T15:48:59.680637097Z" level=info msg="StartContainer for \"7129d7d5c4c6e1df489295318cc829c224da57dd7fc8abf70460869214af7ca1\"" Feb 13 15:48:59.700299 containerd[1496]: time="2025-02-13T15:48:59.700237344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fh6vd,Uid:d4048bd4-6051-425b-a63c-aa9843d3cf79,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263\"" Feb 13 15:48:59.701265 kubelet[2718]: E0213 15:48:59.701238 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:48:59.717746 systemd[1]: Started cri-containerd-7129d7d5c4c6e1df489295318cc829c224da57dd7fc8abf70460869214af7ca1.scope - libcontainer container 7129d7d5c4c6e1df489295318cc829c224da57dd7fc8abf70460869214af7ca1. Feb 13 15:48:59.756487 containerd[1496]: time="2025-02-13T15:48:59.756444663Z" level=info msg="StartContainer for \"7129d7d5c4c6e1df489295318cc829c224da57dd7fc8abf70460869214af7ca1\" returns successfully" Feb 13 15:49:00.377864 kubelet[2718]: E0213 15:49:00.377832 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:11.615653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523890649.mount: Deactivated successfully. Feb 13 15:49:14.140508 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:45818.service - OpenSSH per-connection server daemon (10.0.0.1:45818). Feb 13 15:49:14.213934 sshd[3104]: Accepted publickey for core from 10.0.0.1 port 45818 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:14.216403 sshd-session[3104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:14.224110 systemd-logind[1478]: New session 10 of user core. Feb 13 15:49:14.228673 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:49:14.375567 sshd[3106]: Connection closed by 10.0.0.1 port 45818 Feb 13 15:49:14.373851 sshd-session[3104]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:14.377630 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:49:14.378317 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:45818.service: Deactivated successfully. Feb 13 15:49:14.380837 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:49:14.382637 systemd-logind[1478]: Removed session 10. 
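The CreateContainer/StartContainer pairs are kubelet driving containerd over CRI: create a container inside an existing sandbox, receive a container ID (7129d7d5c4c6... here), then start it by ID. A rough sketch of the two gRPC calls with the CRI v1 types, assuming an already-connected runtime client and eliding config details; it is a library-style fragment, not kubelet's code:

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // createAndStart mirrors the log's two-step flow: CreateContainer returns
    // an ID, which StartContainer then runs.
    func createAndStart(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        sandboxID string, cfg *runtimeapi.ContainerConfig,
        sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {

        resp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sandboxID,
            Config:        cfg,
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: resp.ContainerId,
        })
        return resp.ContainerId, err
    }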
Feb 13 15:49:14.957343 containerd[1496]: time="2025-02-13T15:49:14.957281041Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:49:14.958048 containerd[1496]: time="2025-02-13T15:49:14.958001524Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:49:14.959270 containerd[1496]: time="2025-02-13T15:49:14.959233708Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:49:14.960888 containerd[1496]: time="2025-02-13T15:49:14.960856706Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.340682337s" Feb 13 15:49:14.960952 containerd[1496]: time="2025-02-13T15:49:14.960889918Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:49:14.967122 containerd[1496]: time="2025-02-13T15:49:14.967080543Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:49:14.988177 containerd[1496]: time="2025-02-13T15:49:14.988126775Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:49:15.001818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798683570.mount: Deactivated successfully. Feb 13 15:49:15.005567 containerd[1496]: time="2025-02-13T15:49:15.005489599Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\"" Feb 13 15:49:15.009133 containerd[1496]: time="2025-02-13T15:49:15.009071886Z" level=info msg="StartContainer for \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\"" Feb 13 15:49:15.039709 systemd[1]: Started cri-containerd-b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b.scope - libcontainer container b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b. Feb 13 15:49:15.101813 containerd[1496]: time="2025-02-13T15:49:15.101754305Z" level=info msg="StartContainer for \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\" returns successfully" Feb 13 15:49:15.113284 systemd[1]: cri-containerd-b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b.scope: Deactivated successfully. 
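The pull statistics allow a quick sanity check: 166,730,503 bytes in 15.340682337 s is roughly 10.9 MB/s, and this single large image accounts for essentially the whole gap between the sandbox creation at 15:48:59 and the first cilium init container at 15:49:15. Computed directly:

    package main

    import "fmt"

    func main() {
        const bytesRead = 166730503.0 // "bytes read" from the pull log
        const seconds = 15.340682337  // reported pull duration
        fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ≈ 10.9 MB/s
    }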
Feb 13 15:49:15.827377 kubelet[2718]: E0213 15:49:15.827338 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:15.865872 containerd[1496]: time="2025-02-13T15:49:15.864250388Z" level=info msg="shim disconnected" id=b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b namespace=k8s.io Feb 13 15:49:15.865872 containerd[1496]: time="2025-02-13T15:49:15.864311753Z" level=warning msg="cleaning up after shim disconnected" id=b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b namespace=k8s.io Feb 13 15:49:15.865872 containerd[1496]: time="2025-02-13T15:49:15.864319999Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:49:15.868721 kubelet[2718]: I0213 15:49:15.868656 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-db6lv" podStartSLOduration=18.868638868 podStartE2EDuration="18.868638868s" podCreationTimestamp="2025-02-13 15:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:49:00.410827128 +0000 UTC m=+18.165159337" watchObservedRunningTime="2025-02-13 15:49:15.868638868 +0000 UTC m=+33.622971087" Feb 13 15:49:15.998916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b-rootfs.mount: Deactivated successfully. Feb 13 15:49:16.830353 kubelet[2718]: E0213 15:49:16.830314 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:16.832534 containerd[1496]: time="2025-02-13T15:49:16.832487468Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:49:16.865048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769070797.mount: Deactivated successfully. Feb 13 15:49:16.871352 containerd[1496]: time="2025-02-13T15:49:16.871319187Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\"" Feb 13 15:49:16.872474 containerd[1496]: time="2025-02-13T15:49:16.871792044Z" level=info msg="StartContainer for \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\"" Feb 13 15:49:16.900682 systemd[1]: Started cri-containerd-1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca.scope - libcontainer container 1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca. Feb 13 15:49:16.930559 containerd[1496]: time="2025-02-13T15:49:16.930493600Z" level=info msg="StartContainer for \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\" returns successfully" Feb 13 15:49:16.947785 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:49:16.948077 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:49:16.948148 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:49:16.957812 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
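The startup-latency entry above is internally consistent: kube-proxy's creation timestamp is 15:48:57 and it was observed running at 15:49:15.868638868, giving podStartSLOduration=18.868638868s; the zero-value (0001-01-01) pull timestamps mean no image pull was recorded, so the SLO and end-to-end durations coincide. The subtraction, using stdlib time parsing:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-02-13 15:48:57 +0000 UTC")
        running, _ := time.Parse(layout, "2025-02-13 15:49:15.868638868 +0000 UTC")
        fmt.Println(running.Sub(created)) // 18.868638868s
    }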
Feb 13 15:49:16.957991 systemd[1]: cri-containerd-1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca.scope: Deactivated successfully. Feb 13 15:49:16.978399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:49:16.999282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca-rootfs.mount: Deactivated successfully. Feb 13 15:49:17.014173 containerd[1496]: time="2025-02-13T15:49:17.014029181Z" level=info msg="shim disconnected" id=1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca namespace=k8s.io Feb 13 15:49:17.014173 containerd[1496]: time="2025-02-13T15:49:17.014089594Z" level=warning msg="cleaning up after shim disconnected" id=1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca namespace=k8s.io Feb 13 15:49:17.014173 containerd[1496]: time="2025-02-13T15:49:17.014098651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:49:17.201416 containerd[1496]: time="2025-02-13T15:49:17.201286922Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:49:17.209250 containerd[1496]: time="2025-02-13T15:49:17.209187887Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:49:17.224259 containerd[1496]: time="2025-02-13T15:49:17.224201694Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:49:17.225438 containerd[1496]: time="2025-02-13T15:49:17.225406135Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.258285706s" Feb 13 15:49:17.225438 containerd[1496]: time="2025-02-13T15:49:17.225436772Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:49:17.227719 containerd[1496]: time="2025-02-13T15:49:17.227672370Z" level=info msg="CreateContainer within sandbox \"9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:49:17.240852 containerd[1496]: time="2025-02-13T15:49:17.240795066Z" level=info msg="CreateContainer within sandbox \"9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\"" Feb 13 15:49:17.243558 containerd[1496]: time="2025-02-13T15:49:17.242766818Z" level=info msg="StartContainer for \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\"" Feb 13 15:49:17.275685 systemd[1]: Started cri-containerd-83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91.scope - libcontainer container 
83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91. Feb 13 15:49:17.303686 containerd[1496]: time="2025-02-13T15:49:17.303610297Z" level=info msg="StartContainer for \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\" returns successfully" Feb 13 15:49:17.836701 kubelet[2718]: E0213 15:49:17.836661 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:17.839383 kubelet[2718]: E0213 15:49:17.839343 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:17.843244 containerd[1496]: time="2025-02-13T15:49:17.843205083Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:49:17.864187 containerd[1496]: time="2025-02-13T15:49:17.864128054Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\"" Feb 13 15:49:17.866252 containerd[1496]: time="2025-02-13T15:49:17.866213640Z" level=info msg="StartContainer for \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\"" Feb 13 15:49:17.871706 kubelet[2718]: I0213 15:49:17.869118 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-fh6vd" podStartSLOduration=3.344472741 podStartE2EDuration="20.869100038s" podCreationTimestamp="2025-02-13 15:48:57 +0000 UTC" firstStartedPulling="2025-02-13 15:48:59.701815774 +0000 UTC m=+17.456147993" lastFinishedPulling="2025-02-13 15:49:17.226443071 +0000 UTC m=+34.980775290" observedRunningTime="2025-02-13 15:49:17.854239259 +0000 UTC m=+35.608571478" watchObservedRunningTime="2025-02-13 15:49:17.869100038 +0000 UTC m=+35.623432257" Feb 13 15:49:17.906806 systemd[1]: Started cri-containerd-fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a.scope - libcontainer container fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a. Feb 13 15:49:17.946625 systemd[1]: cri-containerd-fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a.scope: Deactivated successfully. Feb 13 15:49:18.050939 containerd[1496]: time="2025-02-13T15:49:18.050885149Z" level=info msg="StartContainer for \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\" returns successfully" Feb 13 15:49:18.079655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a-rootfs.mount: Deactivated successfully. 
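For cilium-operator the latency tracker separates image-pull time from the SLO figure: end-to-end startup is 20.869100038 s, the pull window spans m=+17.456147993 to m=+34.980775290, and subtracting the pull leaves exactly the reported podStartSLOduration of 3.344472741 s. Verified with the logged monotonic offsets:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (m=+...) from the cilium-operator latency entry.
        const e2e = 20.869100038       // podStartE2EDuration
        const pullStart = 17.456147993 // firstStartedPulling
        const pullEnd = 34.980775290   // lastFinishedPulling
        fmt.Printf("%.9f s\n", e2e-(pullEnd-pullStart)) // 3.344472741 s
    }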
Feb 13 15:49:18.163592 containerd[1496]: time="2025-02-13T15:49:18.163386637Z" level=info msg="shim disconnected" id=fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a namespace=k8s.io Feb 13 15:49:18.163592 containerd[1496]: time="2025-02-13T15:49:18.163449024Z" level=warning msg="cleaning up after shim disconnected" id=fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a namespace=k8s.io Feb 13 15:49:18.163592 containerd[1496]: time="2025-02-13T15:49:18.163460566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:49:18.842842 kubelet[2718]: E0213 15:49:18.842810 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:18.844164 kubelet[2718]: E0213 15:49:18.842882 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:18.844479 containerd[1496]: time="2025-02-13T15:49:18.844437277Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:49:19.391159 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:36738.service - OpenSSH per-connection server daemon (10.0.0.1:36738). Feb 13 15:49:19.436662 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 36738 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:19.438288 sshd-session[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:19.442596 systemd-logind[1478]: New session 11 of user core. Feb 13 15:49:19.449659 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:49:19.526019 containerd[1496]: time="2025-02-13T15:49:19.525437289Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\"" Feb 13 15:49:19.526445 containerd[1496]: time="2025-02-13T15:49:19.526424261Z" level=info msg="StartContainer for \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\"" Feb 13 15:49:19.569758 systemd[1]: Started cri-containerd-fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a.scope - libcontainer container fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a. Feb 13 15:49:19.570144 sshd[3371]: Connection closed by 10.0.0.1 port 36738 Feb 13 15:49:19.570625 sshd-session[3369]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:19.574771 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:49:19.576100 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:36738.service: Deactivated successfully. Feb 13 15:49:19.578555 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:49:19.580218 systemd-logind[1478]: Removed session 11. Feb 13 15:49:19.597781 systemd[1]: cri-containerd-fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a.scope: Deactivated successfully. 
Feb 13 15:49:19.671708 containerd[1496]: time="2025-02-13T15:49:19.671182618Z" level=info msg="StartContainer for \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\" returns successfully" Feb 13 15:49:19.691422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a-rootfs.mount: Deactivated successfully. Feb 13 15:49:19.782869 containerd[1496]: time="2025-02-13T15:49:19.782796586Z" level=info msg="shim disconnected" id=fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a namespace=k8s.io Feb 13 15:49:19.782869 containerd[1496]: time="2025-02-13T15:49:19.782864063Z" level=warning msg="cleaning up after shim disconnected" id=fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a namespace=k8s.io Feb 13 15:49:19.782869 containerd[1496]: time="2025-02-13T15:49:19.782872579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:49:19.940384 kubelet[2718]: E0213 15:49:19.940017 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:19.943570 containerd[1496]: time="2025-02-13T15:49:19.941976233Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:49:20.115912 containerd[1496]: time="2025-02-13T15:49:20.115849460Z" level=info msg="CreateContainer within sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\"" Feb 13 15:49:20.116411 containerd[1496]: time="2025-02-13T15:49:20.116385657Z" level=info msg="StartContainer for \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\"" Feb 13 15:49:20.141693 systemd[1]: Started cri-containerd-edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f.scope - libcontainer container edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f. Feb 13 15:49:20.180135 containerd[1496]: time="2025-02-13T15:49:20.180070737Z" level=info msg="StartContainer for \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\" returns successfully" Feb 13 15:49:20.357255 kubelet[2718]: I0213 15:49:20.357221 2718 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:49:20.378298 kubelet[2718]: I0213 15:49:20.377923 2718 topology_manager.go:215] "Topology Admit Handler" podUID="4eebafcf-e89d-4671-a1c9-8830f4d38d86" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lgth9" Feb 13 15:49:20.378298 kubelet[2718]: I0213 15:49:20.378097 2718 topology_manager.go:215] "Topology Admit Handler" podUID="3661e24e-6567-4c23-bf10-e71f66e3fb30" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qpqjl" Feb 13 15:49:20.387427 systemd[1]: Created slice kubepods-burstable-pod4eebafcf_e89d_4671_a1c9_8830f4d38d86.slice - libcontainer container kubepods-burstable-pod4eebafcf_e89d_4671_a1c9_8830f4d38d86.slice. Feb 13 15:49:20.396053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497889533.mount: Deactivated successfully. Feb 13 15:49:20.401801 systemd[1]: Created slice kubepods-burstable-pod3661e24e_6567_4c23_bf10_e71f66e3fb30.slice - libcontainer container kubepods-burstable-pod3661e24e_6567_4c23_bf10_e71f66e3fb30.slice. 
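The cilium-agent container starting here also closes the loop on the much earlier containerd message "No cni config template is specified, wait for other system components to drop the config": once the agent runs it drops a CNI config into /etc/cni/net.d, the node flips to ready ("Fast updating node status as it just became ready" below), and the pending coredns pods are admitted. A hypothetical sketch of writing such a conflist; the file name and contents are assumptions modeled on Cilium's conventions, not values taken from this log:

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        conf := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "cilium",
            "plugins": []map[string]any{
                {"type": "cilium-cni"}, // assumed plugin type; see Cilium docs
            },
        }
        b, _ := json.MarshalIndent(conf, "", "  ")
        // Conventional CNI config directory; path and name are hypothetical here.
        _ = os.WriteFile("/etc/cni/net.d/05-cilium.conflist", b, 0o644)
    }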
Feb 13 15:49:20.448134 kubelet[2718]: I0213 15:49:20.448086 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftjqs\" (UniqueName: \"kubernetes.io/projected/3661e24e-6567-4c23-bf10-e71f66e3fb30-kube-api-access-ftjqs\") pod \"coredns-7db6d8ff4d-qpqjl\" (UID: \"3661e24e-6567-4c23-bf10-e71f66e3fb30\") " pod="kube-system/coredns-7db6d8ff4d-qpqjl" Feb 13 15:49:20.448134 kubelet[2718]: I0213 15:49:20.448123 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4eebafcf-e89d-4671-a1c9-8830f4d38d86-config-volume\") pod \"coredns-7db6d8ff4d-lgth9\" (UID: \"4eebafcf-e89d-4671-a1c9-8830f4d38d86\") " pod="kube-system/coredns-7db6d8ff4d-lgth9" Feb 13 15:49:20.448134 kubelet[2718]: I0213 15:49:20.448143 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3661e24e-6567-4c23-bf10-e71f66e3fb30-config-volume\") pod \"coredns-7db6d8ff4d-qpqjl\" (UID: \"3661e24e-6567-4c23-bf10-e71f66e3fb30\") " pod="kube-system/coredns-7db6d8ff4d-qpqjl" Feb 13 15:49:20.448392 kubelet[2718]: I0213 15:49:20.448161 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7z92\" (UniqueName: \"kubernetes.io/projected/4eebafcf-e89d-4671-a1c9-8830f4d38d86-kube-api-access-m7z92\") pod \"coredns-7db6d8ff4d-lgth9\" (UID: \"4eebafcf-e89d-4671-a1c9-8830f4d38d86\") " pod="kube-system/coredns-7db6d8ff4d-lgth9" Feb 13 15:49:20.700347 kubelet[2718]: E0213 15:49:20.700247 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:20.701058 containerd[1496]: time="2025-02-13T15:49:20.701024618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgth9,Uid:4eebafcf-e89d-4671-a1c9-8830f4d38d86,Namespace:kube-system,Attempt:0,}" Feb 13 15:49:20.706357 kubelet[2718]: E0213 15:49:20.706307 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:20.706879 containerd[1496]: time="2025-02-13T15:49:20.706825235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qpqjl,Uid:3661e24e-6567-4c23-bf10-e71f66e3fb30,Namespace:kube-system,Attempt:0,}" Feb 13 15:49:20.944589 kubelet[2718]: E0213 15:49:20.944526 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:21.946346 kubelet[2718]: E0213 15:49:21.946305 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:22.841270 systemd-networkd[1410]: cilium_host: Link UP Feb 13 15:49:22.841537 systemd-networkd[1410]: cilium_net: Link UP Feb 13 15:49:22.841851 systemd-networkd[1410]: cilium_net: Gained carrier Feb 13 15:49:22.842078 systemd-networkd[1410]: cilium_host: Gained carrier Feb 13 15:49:22.842258 systemd-networkd[1410]: cilium_host: Gained IPv6LL Feb 13 15:49:22.947887 kubelet[2718]: E0213 15:49:22.947809 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:22.954195 systemd-networkd[1410]: cilium_vxlan: Link UP Feb 13 15:49:22.954204 systemd-networkd[1410]: cilium_vxlan: Gained carrier Feb 13 15:49:23.176778 kernel: NET: Registered PF_ALG protocol family Feb 13 15:49:23.662727 systemd-networkd[1410]: cilium_net: Gained IPv6LL Feb 13 15:49:23.905574 systemd-networkd[1410]: lxc_health: Link UP Feb 13 15:49:23.919572 systemd-networkd[1410]: lxc_health: Gained carrier Feb 13 15:49:24.098244 systemd-networkd[1410]: lxcaab35dbfae68: Link UP Feb 13 15:49:24.110702 kernel: eth0: renamed from tmpbb71b Feb 13 15:49:24.118807 systemd-networkd[1410]: lxcaab35dbfae68: Gained carrier Feb 13 15:49:24.173698 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL Feb 13 15:49:24.231242 systemd-networkd[1410]: lxc53ed72487c18: Link UP Feb 13 15:49:24.240594 kernel: eth0: renamed from tmp10d70 Feb 13 15:49:24.245405 systemd-networkd[1410]: lxc53ed72487c18: Gained carrier Feb 13 15:49:24.534152 kubelet[2718]: E0213 15:49:24.534101 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:24.551756 kubelet[2718]: I0213 15:49:24.551689 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7hl9x" podStartSLOduration=12.20428728 podStartE2EDuration="27.551665584s" podCreationTimestamp="2025-02-13 15:48:57 +0000 UTC" firstStartedPulling="2025-02-13 15:48:59.619461878 +0000 UTC m=+17.373794097" lastFinishedPulling="2025-02-13 15:49:14.966840182 +0000 UTC m=+32.721172401" observedRunningTime="2025-02-13 15:49:21.124084131 +0000 UTC m=+38.878416360" watchObservedRunningTime="2025-02-13 15:49:24.551665584 +0000 UTC m=+42.305997803" Feb 13 15:49:24.594360 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:36754.service - OpenSSH per-connection server daemon (10.0.0.1:36754). Feb 13 15:49:24.637257 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 36754 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:24.639228 sshd-session[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:24.647826 systemd-logind[1478]: New session 12 of user core. Feb 13 15:49:24.655926 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:49:24.795717 sshd[3955]: Connection closed by 10.0.0.1 port 36754 Feb 13 15:49:24.797830 sshd-session[3953]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:24.801397 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:36754.service: Deactivated successfully. Feb 13 15:49:24.804145 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:49:24.806332 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:49:24.807430 systemd-logind[1478]: Removed session 12. 
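The systemd-networkd burst above traces Cilium's datapath coming up: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, lxc_health, and one lxc* veth per pod endpoint. The kernel's "eth0: renamed from tmpbb71b" lines show the temporary host-side names, whose prefixes match the sandbox IDs created below (bb71b82c..., 10d70fd8...). A read-only stdlib sketch that lists the interfaces one would expect to find on this node; it inspects, and creates nothing:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
                fmt.Printf("%-16s up=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0)
            }
        }
    }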
Feb 13 15:49:24.952398 kubelet[2718]: E0213 15:49:24.952351 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:25.773826 systemd-networkd[1410]: lxc_health: Gained IPv6LL Feb 13 15:49:25.953902 kubelet[2718]: E0213 15:49:25.953860 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:25.965690 systemd-networkd[1410]: lxcaab35dbfae68: Gained IPv6LL Feb 13 15:49:26.029725 systemd-networkd[1410]: lxc53ed72487c18: Gained IPv6LL Feb 13 15:49:28.192870 containerd[1496]: time="2025-02-13T15:49:28.192791506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:49:28.192870 containerd[1496]: time="2025-02-13T15:49:28.192848814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:49:28.192870 containerd[1496]: time="2025-02-13T15:49:28.192862700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:49:28.193377 containerd[1496]: time="2025-02-13T15:49:28.192949483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:49:28.217730 systemd[1]: Started cri-containerd-bb71b82c5512b4398094bb22072a19dc6733245387eec2c79af3015cdf9397c4.scope - libcontainer container bb71b82c5512b4398094bb22072a19dc6733245387eec2c79af3015cdf9397c4. Feb 13 15:49:28.231738 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:49:28.259678 containerd[1496]: time="2025-02-13T15:49:28.259530724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:49:28.259678 containerd[1496]: time="2025-02-13T15:49:28.259628597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:49:28.260051 containerd[1496]: time="2025-02-13T15:49:28.259673742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:49:28.260051 containerd[1496]: time="2025-02-13T15:49:28.259789410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:49:28.263675 containerd[1496]: time="2025-02-13T15:49:28.263312380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qpqjl,Uid:3661e24e-6567-4c23-bf10-e71f66e3fb30,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb71b82c5512b4398094bb22072a19dc6733245387eec2c79af3015cdf9397c4\"" Feb 13 15:49:28.267559 kubelet[2718]: E0213 15:49:28.267476 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:28.273140 containerd[1496]: time="2025-02-13T15:49:28.272833966Z" level=info msg="CreateContainer within sandbox \"bb71b82c5512b4398094bb22072a19dc6733245387eec2c79af3015cdf9397c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:49:28.292708 systemd[1]: Started cri-containerd-10d70fd828cf51210205cb5e9bbefcdfd3cc3389835969a5ca2eeba939cb7943.scope - libcontainer container 10d70fd828cf51210205cb5e9bbefcdfd3cc3389835969a5ca2eeba939cb7943. Feb 13 15:49:28.304911 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:49:28.328728 containerd[1496]: time="2025-02-13T15:49:28.328689973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgth9,Uid:4eebafcf-e89d-4671-a1c9-8830f4d38d86,Namespace:kube-system,Attempt:0,} returns sandbox id \"10d70fd828cf51210205cb5e9bbefcdfd3cc3389835969a5ca2eeba939cb7943\"" Feb 13 15:49:28.329402 kubelet[2718]: E0213 15:49:28.329356 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:28.331309 containerd[1496]: time="2025-02-13T15:49:28.331172370Z" level=info msg="CreateContainer within sandbox \"10d70fd828cf51210205cb5e9bbefcdfd3cc3389835969a5ca2eeba939cb7943\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:49:28.917668 containerd[1496]: time="2025-02-13T15:49:28.917613712Z" level=info msg="CreateContainer within sandbox \"bb71b82c5512b4398094bb22072a19dc6733245387eec2c79af3015cdf9397c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"486772ba25a8a761f40ebfc55a9ddacb68541c92731c9616b023bd66428b7af1\"" Feb 13 15:49:28.918191 containerd[1496]: time="2025-02-13T15:49:28.918154226Z" level=info msg="StartContainer for \"486772ba25a8a761f40ebfc55a9ddacb68541c92731c9616b023bd66428b7af1\"" Feb 13 15:49:28.949759 systemd[1]: Started cri-containerd-486772ba25a8a761f40ebfc55a9ddacb68541c92731c9616b023bd66428b7af1.scope - libcontainer container 486772ba25a8a761f40ebfc55a9ddacb68541c92731c9616b023bd66428b7af1. Feb 13 15:49:28.966261 containerd[1496]: time="2025-02-13T15:49:28.966044436Z" level=info msg="CreateContainer within sandbox \"10d70fd828cf51210205cb5e9bbefcdfd3cc3389835969a5ca2eeba939cb7943\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4b7b7cb45bb1b3e5328154d23e15cce6eb944ef32cae31cc24c435a3d534c0c\"" Feb 13 15:49:28.968125 containerd[1496]: time="2025-02-13T15:49:28.968093591Z" level=info msg="StartContainer for \"f4b7b7cb45bb1b3e5328154d23e15cce6eb944ef32cae31cc24c435a3d534c0c\"" Feb 13 15:49:28.994702 systemd[1]: Started cri-containerd-f4b7b7cb45bb1b3e5328154d23e15cce6eb944ef32cae31cc24c435a3d534c0c.scope - libcontainer container f4b7b7cb45bb1b3e5328154d23e15cce6eb944ef32cae31cc24c435a3d534c0c. 
Feb 13 15:49:29.080470 containerd[1496]: time="2025-02-13T15:49:29.080416795Z" level=info msg="StartContainer for \"f4b7b7cb45bb1b3e5328154d23e15cce6eb944ef32cae31cc24c435a3d534c0c\" returns successfully" Feb 13 15:49:29.080653 containerd[1496]: time="2025-02-13T15:49:29.080433988Z" level=info msg="StartContainer for \"486772ba25a8a761f40ebfc55a9ddacb68541c92731c9616b023bd66428b7af1\" returns successfully" Feb 13 15:49:29.810868 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:57356.service - OpenSSH per-connection server daemon (10.0.0.1:57356). Feb 13 15:49:29.851051 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 57356 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:29.852614 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:29.856617 systemd-logind[1478]: New session 13 of user core. Feb 13 15:49:29.870702 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:49:29.962723 kubelet[2718]: E0213 15:49:29.962689 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:29.966374 kubelet[2718]: E0213 15:49:29.966316 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:30.153596 kubelet[2718]: I0213 15:49:30.152976 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lgth9" podStartSLOduration=33.152958886 podStartE2EDuration="33.152958886s" podCreationTimestamp="2025-02-13 15:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:49:30.152629558 +0000 UTC m=+47.906961777" watchObservedRunningTime="2025-02-13 15:49:30.152958886 +0000 UTC m=+47.907291105" Feb 13 15:49:30.158102 sshd[4142]: Connection closed by 10.0.0.1 port 57356 Feb 13 15:49:30.158455 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:30.162682 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:57356.service: Deactivated successfully. Feb 13 15:49:30.164513 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:49:30.165272 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:49:30.166118 systemd-logind[1478]: Removed session 13. 
Feb 13 15:49:30.506215 kubelet[2718]: I0213 15:49:30.505648 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qpqjl" podStartSLOduration=33.505630524 podStartE2EDuration="33.505630524s" podCreationTimestamp="2025-02-13 15:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:49:30.505200999 +0000 UTC m=+48.259533218" watchObservedRunningTime="2025-02-13 15:49:30.505630524 +0000 UTC m=+48.259962744" Feb 13 15:49:30.968970 kubelet[2718]: E0213 15:49:30.968934 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:30.969404 kubelet[2718]: E0213 15:49:30.969155 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:31.970641 kubelet[2718]: E0213 15:49:31.970584 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:31.971207 kubelet[2718]: E0213 15:49:31.970778 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:49:35.170797 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:57358.service - OpenSSH per-connection server daemon (10.0.0.1:57358). Feb 13 15:49:35.214079 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 57358 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:35.215537 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:35.220159 systemd-logind[1478]: New session 14 of user core. Feb 13 15:49:35.236683 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:49:35.349961 sshd[4170]: Connection closed by 10.0.0.1 port 57358 Feb 13 15:49:35.350352 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:35.354453 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:57358.service: Deactivated successfully. Feb 13 15:49:35.356393 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:49:35.357025 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:49:35.357916 systemd-logind[1478]: Removed session 14. Feb 13 15:49:40.365942 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:53260.service - OpenSSH per-connection server daemon (10.0.0.1:53260). Feb 13 15:49:40.404359 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 53260 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:40.406003 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:40.410608 systemd-logind[1478]: New session 15 of user core. Feb 13 15:49:40.418765 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:49:40.532943 sshd[4185]: Connection closed by 10.0.0.1 port 53260 Feb 13 15:49:40.533346 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:40.541915 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:53260.service: Deactivated successfully. Feb 13 15:49:40.543804 systemd[1]: session-15.scope: Deactivated successfully. 
Feb 13 15:49:40.545663 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:49:40.552951 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:53266.service - OpenSSH per-connection server daemon (10.0.0.1:53266). Feb 13 15:49:40.553911 systemd-logind[1478]: Removed session 15. Feb 13 15:49:40.590185 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 53266 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:40.591843 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:40.595940 systemd-logind[1478]: New session 16 of user core. Feb 13 15:49:40.607706 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:49:40.756075 sshd[4200]: Connection closed by 10.0.0.1 port 53266 Feb 13 15:49:40.757273 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:40.767785 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:53266.service: Deactivated successfully. Feb 13 15:49:40.769775 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:49:40.773435 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:49:40.781863 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:53280.service - OpenSSH per-connection server daemon (10.0.0.1:53280). Feb 13 15:49:40.782808 systemd-logind[1478]: Removed session 16. Feb 13 15:49:40.814062 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 53280 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:40.815721 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:40.820129 systemd-logind[1478]: New session 17 of user core. Feb 13 15:49:40.828730 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:49:40.940447 sshd[4212]: Connection closed by 10.0.0.1 port 53280 Feb 13 15:49:40.940865 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:40.944923 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:53280.service: Deactivated successfully. Feb 13 15:49:40.947087 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:49:40.947901 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:49:40.948844 systemd-logind[1478]: Removed session 17. Feb 13 15:49:45.952455 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:53296.service - OpenSSH per-connection server daemon (10.0.0.1:53296). Feb 13 15:49:45.988098 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 53296 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:45.989408 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:45.992999 systemd-logind[1478]: New session 18 of user core. Feb 13 15:49:46.003330 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:49:46.122537 sshd[4228]: Connection closed by 10.0.0.1 port 53296 Feb 13 15:49:46.122895 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:46.126565 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:53296.service: Deactivated successfully. Feb 13 15:49:46.128398 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:49:46.129003 systemd-logind[1478]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:49:46.129862 systemd-logind[1478]: Removed session 18. 
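The sshd units throughout these sessions follow a per-connection naming pattern: sshd@<seq>-<local>:<port>-<peer>:<port>.service, so "sshd@15-10.0.0.60:22-10.0.0.1:53266.service" already encodes both TCP endpoints of the connection. A throwaway parser for that instance string; the format is an observation from these logs, not a documented stable interface:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        unit := "sshd@15-10.0.0.60:22-10.0.0.1:53266.service"
        inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        parts := strings.SplitN(inst, "-", 3) // ["15", "10.0.0.60:22", "10.0.0.1:53266"]
        fmt.Printf("seq=%s local=%s peer=%s\n", parts[0], parts[1], parts[2])
    }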
Feb 13 15:49:51.138273 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:56032.service - OpenSSH per-connection server daemon (10.0.0.1:56032). Feb 13 15:49:51.174173 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 56032 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:51.175494 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:51.179069 systemd-logind[1478]: New session 19 of user core. Feb 13 15:49:51.188645 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:49:51.290328 sshd[4243]: Connection closed by 10.0.0.1 port 56032 Feb 13 15:49:51.290834 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:51.303235 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:56032.service: Deactivated successfully. Feb 13 15:49:51.305036 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:49:51.306531 systemd-logind[1478]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:49:51.315828 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:56036.service - OpenSSH per-connection server daemon (10.0.0.1:56036). Feb 13 15:49:51.317109 systemd-logind[1478]: Removed session 19. Feb 13 15:49:51.348238 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 56036 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:51.349572 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:51.353266 systemd-logind[1478]: New session 20 of user core. Feb 13 15:49:51.363677 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:49:51.585015 sshd[4257]: Connection closed by 10.0.0.1 port 56036 Feb 13 15:49:51.585469 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:51.596239 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:56036.service: Deactivated successfully. Feb 13 15:49:51.597953 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:49:51.599371 systemd-logind[1478]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:49:51.600762 systemd[1]: Started sshd@20-10.0.0.60:22-10.0.0.1:56042.service - OpenSSH per-connection server daemon (10.0.0.1:56042). Feb 13 15:49:51.601658 systemd-logind[1478]: Removed session 20. Feb 13 15:49:51.640850 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 56042 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:51.642183 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:51.646019 systemd-logind[1478]: New session 21 of user core. Feb 13 15:49:51.654658 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:49:53.685477 sshd[4269]: Connection closed by 10.0.0.1 port 56042 Feb 13 15:49:53.685964 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:53.701157 systemd[1]: sshd@20-10.0.0.60:22-10.0.0.1:56042.service: Deactivated successfully. Feb 13 15:49:53.702811 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:49:53.704094 systemd-logind[1478]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:49:53.705299 systemd[1]: Started sshd@21-10.0.0.60:22-10.0.0.1:56058.service - OpenSSH per-connection server daemon (10.0.0.1:56058). Feb 13 15:49:53.706078 systemd-logind[1478]: Removed session 21. 
Feb 13 15:49:53.741893 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 56058 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:53.743236 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:53.746995 systemd-logind[1478]: New session 22 of user core. Feb 13 15:49:53.756651 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:49:54.321180 sshd[4291]: Connection closed by 10.0.0.1 port 56058 Feb 13 15:49:54.321655 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:54.332263 systemd[1]: sshd@21-10.0.0.60:22-10.0.0.1:56058.service: Deactivated successfully. Feb 13 15:49:54.335629 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:49:54.338309 systemd-logind[1478]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:49:54.347069 systemd[1]: Started sshd@22-10.0.0.60:22-10.0.0.1:56062.service - OpenSSH per-connection server daemon (10.0.0.1:56062). Feb 13 15:49:54.348253 systemd-logind[1478]: Removed session 22. Feb 13 15:49:54.378473 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 56062 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:54.379992 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:54.383899 systemd-logind[1478]: New session 23 of user core. Feb 13 15:49:54.394668 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:49:54.505144 sshd[4303]: Connection closed by 10.0.0.1 port 56062 Feb 13 15:49:54.505518 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:54.510243 systemd[1]: sshd@22-10.0.0.60:22-10.0.0.1:56062.service: Deactivated successfully. Feb 13 15:49:54.512519 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:49:54.513219 systemd-logind[1478]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:49:54.514073 systemd-logind[1478]: Removed session 23. Feb 13 15:49:59.516581 systemd[1]: Started sshd@23-10.0.0.60:22-10.0.0.1:39530.service - OpenSSH per-connection server daemon (10.0.0.1:39530). Feb 13 15:49:59.558334 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 39530 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:49:59.559788 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:49:59.563653 systemd-logind[1478]: New session 24 of user core. Feb 13 15:49:59.571666 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:49:59.681534 sshd[4318]: Connection closed by 10.0.0.1 port 39530 Feb 13 15:49:59.681939 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Feb 13 15:49:59.686117 systemd[1]: sshd@23-10.0.0.60:22-10.0.0.1:39530.service: Deactivated successfully. Feb 13 15:49:59.688206 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:49:59.688915 systemd-logind[1478]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:49:59.689946 systemd-logind[1478]: Removed session 24. 
Feb 13 15:50:03.335949 kubelet[2718]: E0213 15:50:03.335896 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:50:04.336040 kubelet[2718]: E0213 15:50:04.335996 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:50:04.693632 systemd[1]: Started sshd@24-10.0.0.60:22-10.0.0.1:39542.service - OpenSSH per-connection server daemon (10.0.0.1:39542). Feb 13 15:50:04.731917 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 39542 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:04.733516 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:04.738003 systemd-logind[1478]: New session 25 of user core. Feb 13 15:50:04.748700 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:50:04.858067 sshd[4337]: Connection closed by 10.0.0.1 port 39542 Feb 13 15:50:04.858433 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Feb 13 15:50:04.862244 systemd[1]: sshd@24-10.0.0.60:22-10.0.0.1:39542.service: Deactivated successfully. Feb 13 15:50:04.864147 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:50:04.864779 systemd-logind[1478]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:50:04.865759 systemd-logind[1478]: Removed session 25. Feb 13 15:50:09.870842 systemd[1]: Started sshd@25-10.0.0.60:22-10.0.0.1:36930.service - OpenSSH per-connection server daemon (10.0.0.1:36930). Feb 13 15:50:09.916399 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 36930 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:09.918407 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:09.922463 systemd-logind[1478]: New session 26 of user core. Feb 13 15:50:09.936715 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:50:10.044663 sshd[4351]: Connection closed by 10.0.0.1 port 36930 Feb 13 15:50:10.045015 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Feb 13 15:50:10.049323 systemd[1]: sshd@25-10.0.0.60:22-10.0.0.1:36930.service: Deactivated successfully. Feb 13 15:50:10.051587 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:50:10.052317 systemd-logind[1478]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:50:10.053399 systemd-logind[1478]: Removed session 26. Feb 13 15:50:11.335919 kubelet[2718]: E0213 15:50:11.335878 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:50:12.336243 kubelet[2718]: E0213 15:50:12.336180 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:50:15.056650 systemd[1]: Started sshd@26-10.0.0.60:22-10.0.0.1:36938.service - OpenSSH per-connection server daemon (10.0.0.1:36938). 
Feb 13 15:50:15.093780 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 36938 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:15.095177 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:15.099098 systemd-logind[1478]: New session 27 of user core. Feb 13 15:50:15.109688 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:50:15.212489 sshd[4366]: Connection closed by 10.0.0.1 port 36938 Feb 13 15:50:15.212855 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Feb 13 15:50:15.216605 systemd[1]: sshd@26-10.0.0.60:22-10.0.0.1:36938.service: Deactivated successfully. Feb 13 15:50:15.218399 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:50:15.219000 systemd-logind[1478]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:50:15.219876 systemd-logind[1478]: Removed session 27. Feb 13 15:50:20.224209 systemd[1]: Started sshd@27-10.0.0.60:22-10.0.0.1:33730.service - OpenSSH per-connection server daemon (10.0.0.1:33730). Feb 13 15:50:20.261032 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 33730 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:20.262672 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:20.266791 systemd-logind[1478]: New session 28 of user core. Feb 13 15:50:20.276688 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:50:20.378613 sshd[4381]: Connection closed by 10.0.0.1 port 33730 Feb 13 15:50:20.378987 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Feb 13 15:50:20.393504 systemd[1]: sshd@27-10.0.0.60:22-10.0.0.1:33730.service: Deactivated successfully. Feb 13 15:50:20.395560 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:50:20.397163 systemd-logind[1478]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:50:20.405960 systemd[1]: Started sshd@28-10.0.0.60:22-10.0.0.1:33746.service - OpenSSH per-connection server daemon (10.0.0.1:33746). Feb 13 15:50:20.406892 systemd-logind[1478]: Removed session 28. Feb 13 15:50:20.437504 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 33746 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:20.439094 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:20.443121 systemd-logind[1478]: New session 29 of user core. Feb 13 15:50:20.447675 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 15:50:22.045965 containerd[1496]: time="2025-02-13T15:50:22.045907297Z" level=info msg="StopContainer for \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\" with timeout 30 (s)" Feb 13 15:50:22.048250 containerd[1496]: time="2025-02-13T15:50:22.046362709Z" level=info msg="Stop container \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\" with signal terminated" Feb 13 15:50:22.099223 systemd[1]: run-containerd-runc-k8s.io-edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f-runc.YO78vS.mount: Deactivated successfully. Feb 13 15:50:22.100971 systemd[1]: cri-containerd-83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91.scope: Deactivated successfully. 
Feb 13 15:50:22.136046 containerd[1496]: time="2025-02-13T15:50:22.135882399Z" level=info msg="StopContainer for \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\" with timeout 2 (s)" Feb 13 15:50:22.140214 containerd[1496]: time="2025-02-13T15:50:22.138625632Z" level=info msg="Stop container \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\" with signal terminated" Feb 13 15:50:22.145202 containerd[1496]: time="2025-02-13T15:50:22.143723674Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:50:22.144798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91-rootfs.mount: Deactivated successfully. Feb 13 15:50:22.171621 systemd-networkd[1410]: lxc_health: Link DOWN Feb 13 15:50:22.171752 systemd-networkd[1410]: lxc_health: Lost carrier Feb 13 15:50:22.187412 containerd[1496]: time="2025-02-13T15:50:22.187291923Z" level=info msg="shim disconnected" id=83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91 namespace=k8s.io Feb 13 15:50:22.187412 containerd[1496]: time="2025-02-13T15:50:22.187408364Z" level=warning msg="cleaning up after shim disconnected" id=83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91 namespace=k8s.io Feb 13 15:50:22.187656 containerd[1496]: time="2025-02-13T15:50:22.187422500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:50:22.235270 systemd[1]: cri-containerd-edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f.scope: Deactivated successfully. Feb 13 15:50:22.235676 systemd[1]: cri-containerd-edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f.scope: Consumed 7.180s CPU time. Feb 13 15:50:22.240185 containerd[1496]: time="2025-02-13T15:50:22.240116787Z" level=info msg="StopContainer for \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\" returns successfully" Feb 13 15:50:22.247359 containerd[1496]: time="2025-02-13T15:50:22.247322488Z" level=info msg="StopPodSandbox for \"9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263\"" Feb 13 15:50:22.248785 containerd[1496]: time="2025-02-13T15:50:22.247536704Z" level=info msg="Container to stop \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:50:22.254774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263-shm.mount: Deactivated successfully. Feb 13 15:50:22.268172 systemd[1]: cri-containerd-9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263.scope: Deactivated successfully. Feb 13 15:50:22.281861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f-rootfs.mount: Deactivated successfully. 
Feb 13 15:50:22.294701 containerd[1496]: time="2025-02-13T15:50:22.294642770Z" level=info msg="shim disconnected" id=edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f namespace=k8s.io Feb 13 15:50:22.295080 containerd[1496]: time="2025-02-13T15:50:22.294926779Z" level=warning msg="cleaning up after shim disconnected" id=edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f namespace=k8s.io Feb 13 15:50:22.295080 containerd[1496]: time="2025-02-13T15:50:22.294942568Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:50:22.339273 containerd[1496]: time="2025-02-13T15:50:22.339165507Z" level=info msg="StopContainer for \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\" returns successfully" Feb 13 15:50:22.339866 containerd[1496]: time="2025-02-13T15:50:22.339845465Z" level=info msg="StopPodSandbox for \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\"" Feb 13 15:50:22.340035 containerd[1496]: time="2025-02-13T15:50:22.339974809Z" level=info msg="Container to stop \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:50:22.340035 containerd[1496]: time="2025-02-13T15:50:22.340025786Z" level=info msg="Container to stop \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:50:22.340141 containerd[1496]: time="2025-02-13T15:50:22.340039922Z" level=info msg="Container to stop \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:50:22.340141 containerd[1496]: time="2025-02-13T15:50:22.340051033Z" level=info msg="Container to stop \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:50:22.340141 containerd[1496]: time="2025-02-13T15:50:22.340067205Z" level=info msg="Container to stop \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:50:22.346735 containerd[1496]: time="2025-02-13T15:50:22.346684631Z" level=info msg="shim disconnected" id=9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263 namespace=k8s.io Feb 13 15:50:22.346956 containerd[1496]: time="2025-02-13T15:50:22.346755044Z" level=warning msg="cleaning up after shim disconnected" id=9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263 namespace=k8s.io Feb 13 15:50:22.346956 containerd[1496]: time="2025-02-13T15:50:22.346771176Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:50:22.367400 systemd[1]: cri-containerd-c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9.scope: Deactivated successfully. 
Feb 13 15:50:22.388266 containerd[1496]: time="2025-02-13T15:50:22.387386334Z" level=info msg="TearDown network for sandbox \"9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263\" successfully" Feb 13 15:50:22.388266 containerd[1496]: time="2025-02-13T15:50:22.387417734Z" level=info msg="StopPodSandbox for \"9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263\" returns successfully" Feb 13 15:50:22.407513 kubelet[2718]: E0213 15:50:22.407223 2718 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:50:22.433098 containerd[1496]: time="2025-02-13T15:50:22.433025965Z" level=info msg="shim disconnected" id=c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9 namespace=k8s.io Feb 13 15:50:22.433098 containerd[1496]: time="2025-02-13T15:50:22.433092190Z" level=warning msg="cleaning up after shim disconnected" id=c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9 namespace=k8s.io Feb 13 15:50:22.433098 containerd[1496]: time="2025-02-13T15:50:22.433101678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:50:22.458015 containerd[1496]: time="2025-02-13T15:50:22.457963325Z" level=info msg="TearDown network for sandbox \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" successfully" Feb 13 15:50:22.458015 containerd[1496]: time="2025-02-13T15:50:22.458005024Z" level=info msg="StopPodSandbox for \"c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9\" returns successfully" Feb 13 15:50:22.562266 kubelet[2718]: I0213 15:50:22.562181 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfa03493-0fcc-4823-a50a-f1211ddf3e96-clustermesh-secrets\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562266 kubelet[2718]: I0213 15:50:22.562243 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-config-path\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562266 kubelet[2718]: I0213 15:50:22.562266 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-etc-cni-netd\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562266 kubelet[2718]: I0213 15:50:22.562289 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-run\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562639 kubelet[2718]: I0213 15:50:22.562306 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-cgroup\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562639 kubelet[2718]: I0213 15:50:22.562323 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-xtables-lock\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562639 kubelet[2718]: I0213 15:50:22.562349 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hwcv\" (UniqueName: \"kubernetes.io/projected/d4048bd4-6051-425b-a63c-aa9843d3cf79-kube-api-access-6hwcv\") pod \"d4048bd4-6051-425b-a63c-aa9843d3cf79\" (UID: \"d4048bd4-6051-425b-a63c-aa9843d3cf79\") " Feb 13 15:50:22.562639 kubelet[2718]: I0213 15:50:22.562371 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfa03493-0fcc-4823-a50a-f1211ddf3e96-hubble-tls\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562639 kubelet[2718]: I0213 15:50:22.562392 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-hostproc\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562639 kubelet[2718]: I0213 15:50:22.562408 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-host-proc-sys-net\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562838 kubelet[2718]: I0213 15:50:22.562430 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4048bd4-6051-425b-a63c-aa9843d3cf79-cilium-config-path\") pod \"d4048bd4-6051-425b-a63c-aa9843d3cf79\" (UID: \"d4048bd4-6051-425b-a63c-aa9843d3cf79\") " Feb 13 15:50:22.562838 kubelet[2718]: I0213 15:50:22.562451 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-lib-modules\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562838 kubelet[2718]: I0213 15:50:22.562467 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-bpf-maps\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562838 kubelet[2718]: I0213 15:50:22.562489 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cni-path\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562838 kubelet[2718]: I0213 15:50:22.562512 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk9dq\" (UniqueName: \"kubernetes.io/projected/bfa03493-0fcc-4823-a50a-f1211ddf3e96-kube-api-access-dk9dq\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.562838 kubelet[2718]: I0213 15:50:22.562530 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-host-proc-sys-kernel\") pod \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\" (UID: \"bfa03493-0fcc-4823-a50a-f1211ddf3e96\") " Feb 13 15:50:22.563104 kubelet[2718]: I0213 15:50:22.562622 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.563304 kubelet[2718]: I0213 15:50:22.563242 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-hostproc" (OuterVolumeSpecName: "hostproc") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.563304 kubelet[2718]: I0213 15:50:22.563320 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.563488 kubelet[2718]: I0213 15:50:22.563340 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.563488 kubelet[2718]: I0213 15:50:22.563358 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.563488 kubelet[2718]: I0213 15:50:22.563381 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.567268 kubelet[2718]: I0213 15:50:22.566213 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.567268 kubelet[2718]: I0213 15:50:22.566269 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.567268 kubelet[2718]: I0213 15:50:22.566833 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4048bd4-6051-425b-a63c-aa9843d3cf79-kube-api-access-6hwcv" (OuterVolumeSpecName: "kube-api-access-6hwcv") pod "d4048bd4-6051-425b-a63c-aa9843d3cf79" (UID: "d4048bd4-6051-425b-a63c-aa9843d3cf79"). InnerVolumeSpecName "kube-api-access-6hwcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:50:22.567268 kubelet[2718]: I0213 15:50:22.567170 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cni-path" (OuterVolumeSpecName: "cni-path") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.567268 kubelet[2718]: I0213 15:50:22.567193 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:50:22.567615 kubelet[2718]: I0213 15:50:22.567464 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa03493-0fcc-4823-a50a-f1211ddf3e96-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:50:22.567968 kubelet[2718]: I0213 15:50:22.567938 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:50:22.570949 kubelet[2718]: I0213 15:50:22.570902 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa03493-0fcc-4823-a50a-f1211ddf3e96-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:50:22.571612 kubelet[2718]: I0213 15:50:22.571284 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4048bd4-6051-425b-a63c-aa9843d3cf79-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4048bd4-6051-425b-a63c-aa9843d3cf79" (UID: "d4048bd4-6051-425b-a63c-aa9843d3cf79"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:50:22.571612 kubelet[2718]: I0213 15:50:22.571372 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa03493-0fcc-4823-a50a-f1211ddf3e96-kube-api-access-dk9dq" (OuterVolumeSpecName: "kube-api-access-dk9dq") pod "bfa03493-0fcc-4823-a50a-f1211ddf3e96" (UID: "bfa03493-0fcc-4823-a50a-f1211ddf3e96"). InnerVolumeSpecName "kube-api-access-dk9dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:50:22.663783 kubelet[2718]: I0213 15:50:22.663656 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.663783 kubelet[2718]: I0213 15:50:22.663683 2718 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.663783 kubelet[2718]: I0213 15:50:22.663693 2718 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6hwcv\" (UniqueName: \"kubernetes.io/projected/d4048bd4-6051-425b-a63c-aa9843d3cf79-kube-api-access-6hwcv\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.663783 kubelet[2718]: I0213 15:50:22.663706 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.663783 kubelet[2718]: I0213 15:50:22.663716 2718 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.663783 kubelet[2718]: I0213 15:50:22.663725 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4048bd4-6051-425b-a63c-aa9843d3cf79-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.663783 kubelet[2718]: I0213 15:50:22.663735 2718 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfa03493-0fcc-4823-a50a-f1211ddf3e96-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.663783 kubelet[2718]: I0213 15:50:22.663743 2718 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.664084 kubelet[2718]: I0213 15:50:22.663752 2718 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.664084 kubelet[2718]: I0213 15:50:22.663759 2718 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.664084 kubelet[2718]: I0213 15:50:22.663769 2718 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.664084 kubelet[2718]: I0213 15:50:22.663777 2718 reconciler_common.go:289] 
"Volume detached for volume \"kube-api-access-dk9dq\" (UniqueName: \"kubernetes.io/projected/bfa03493-0fcc-4823-a50a-f1211ddf3e96-kube-api-access-dk9dq\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.664084 kubelet[2718]: I0213 15:50:22.663786 2718 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.664084 kubelet[2718]: I0213 15:50:22.663794 2718 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfa03493-0fcc-4823-a50a-f1211ddf3e96-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.664084 kubelet[2718]: I0213 15:50:22.663803 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfa03493-0fcc-4823-a50a-f1211ddf3e96-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:22.664084 kubelet[2718]: I0213 15:50:22.663811 2718 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfa03493-0fcc-4823-a50a-f1211ddf3e96-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:50:23.067908 kubelet[2718]: I0213 15:50:23.067878 2718 scope.go:117] "RemoveContainer" containerID="edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f" Feb 13 15:50:23.068893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c3b5d00fb1b77b77f7248dff7bf5c611e2c073343422e49738ab0988b15f263-rootfs.mount: Deactivated successfully. Feb 13 15:50:23.069019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9-rootfs.mount: Deactivated successfully. Feb 13 15:50:23.069097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7f2918ff081cf67ba90f21390e5f0eceb381854958b25676c9db86a388c03c9-shm.mount: Deactivated successfully. Feb 13 15:50:23.069193 systemd[1]: var-lib-kubelet-pods-d4048bd4\x2d6051\x2d425b\x2da63c\x2daa9843d3cf79-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6hwcv.mount: Deactivated successfully. Feb 13 15:50:23.069278 systemd[1]: var-lib-kubelet-pods-bfa03493\x2d0fcc\x2d4823\x2da50a\x2df1211ddf3e96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddk9dq.mount: Deactivated successfully. Feb 13 15:50:23.069355 systemd[1]: var-lib-kubelet-pods-bfa03493\x2d0fcc\x2d4823\x2da50a\x2df1211ddf3e96-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:50:23.069430 systemd[1]: var-lib-kubelet-pods-bfa03493\x2d0fcc\x2d4823\x2da50a\x2df1211ddf3e96-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:50:23.077843 systemd[1]: Removed slice kubepods-burstable-podbfa03493_0fcc_4823_a50a_f1211ddf3e96.slice - libcontainer container kubepods-burstable-podbfa03493_0fcc_4823_a50a_f1211ddf3e96.slice. Feb 13 15:50:23.078033 systemd[1]: kubepods-burstable-podbfa03493_0fcc_4823_a50a_f1211ddf3e96.slice: Consumed 7.289s CPU time. Feb 13 15:50:23.079369 systemd[1]: Removed slice kubepods-besteffort-podd4048bd4_6051_425b_a63c_aa9843d3cf79.slice - libcontainer container kubepods-besteffort-podd4048bd4_6051_425b_a63c_aa9843d3cf79.slice. 
Feb 13 15:50:23.079632 containerd[1496]: time="2025-02-13T15:50:23.079468896Z" level=info msg="RemoveContainer for \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\"" Feb 13 15:50:23.085956 containerd[1496]: time="2025-02-13T15:50:23.085911440Z" level=info msg="RemoveContainer for \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\" returns successfully" Feb 13 15:50:23.086239 kubelet[2718]: I0213 15:50:23.086212 2718 scope.go:117] "RemoveContainer" containerID="fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a" Feb 13 15:50:23.087880 containerd[1496]: time="2025-02-13T15:50:23.087830563Z" level=info msg="RemoveContainer for \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\"" Feb 13 15:50:23.091615 containerd[1496]: time="2025-02-13T15:50:23.091589557Z" level=info msg="RemoveContainer for \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\" returns successfully" Feb 13 15:50:23.092368 kubelet[2718]: I0213 15:50:23.091747 2718 scope.go:117] "RemoveContainer" containerID="fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a" Feb 13 15:50:23.092525 containerd[1496]: time="2025-02-13T15:50:23.092500081Z" level=info msg="RemoveContainer for \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\"" Feb 13 15:50:23.095946 containerd[1496]: time="2025-02-13T15:50:23.095901599Z" level=info msg="RemoveContainer for \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\" returns successfully" Feb 13 15:50:23.096161 kubelet[2718]: I0213 15:50:23.096121 2718 scope.go:117] "RemoveContainer" containerID="1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca" Feb 13 15:50:23.097532 containerd[1496]: time="2025-02-13T15:50:23.097265100Z" level=info msg="RemoveContainer for \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\"" Feb 13 15:50:23.101010 containerd[1496]: time="2025-02-13T15:50:23.100830408Z" level=info msg="RemoveContainer for \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\" returns successfully" Feb 13 15:50:23.101070 kubelet[2718]: I0213 15:50:23.100992 2718 scope.go:117] "RemoveContainer" containerID="b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b" Feb 13 15:50:23.102284 containerd[1496]: time="2025-02-13T15:50:23.102232182Z" level=info msg="RemoveContainer for \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\"" Feb 13 15:50:23.105711 containerd[1496]: time="2025-02-13T15:50:23.105681550Z" level=info msg="RemoveContainer for \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\" returns successfully" Feb 13 15:50:23.105880 kubelet[2718]: I0213 15:50:23.105849 2718 scope.go:117] "RemoveContainer" containerID="edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f" Feb 13 15:50:23.106038 containerd[1496]: time="2025-02-13T15:50:23.106003570Z" level=error msg="ContainerStatus for \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\": not found" Feb 13 15:50:23.112826 kubelet[2718]: E0213 15:50:23.112786 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\": not found" 
containerID="edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f" Feb 13 15:50:23.112960 kubelet[2718]: I0213 15:50:23.112829 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f"} err="failed to get container status \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"edc3af1118aa02769d8bba33d28401e48adad42511b1d259d544c63fd95c9b0f\": not found" Feb 13 15:50:23.112960 kubelet[2718]: I0213 15:50:23.112937 2718 scope.go:117] "RemoveContainer" containerID="fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a" Feb 13 15:50:23.113162 containerd[1496]: time="2025-02-13T15:50:23.113111874Z" level=error msg="ContainerStatus for \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\": not found" Feb 13 15:50:23.113311 kubelet[2718]: E0213 15:50:23.113277 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\": not found" containerID="fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a" Feb 13 15:50:23.113355 kubelet[2718]: I0213 15:50:23.113305 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a"} err="failed to get container status \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\": rpc error: code = NotFound desc = an error occurred when try to find container \"fef3b24dab48cf0e4e596b6fd2742a512b160fcb16d7408b469305e32a69890a\": not found" Feb 13 15:50:23.113388 kubelet[2718]: I0213 15:50:23.113358 2718 scope.go:117] "RemoveContainer" containerID="fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a" Feb 13 15:50:23.113557 containerd[1496]: time="2025-02-13T15:50:23.113510258Z" level=error msg="ContainerStatus for \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\": not found" Feb 13 15:50:23.113710 kubelet[2718]: E0213 15:50:23.113670 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\": not found" containerID="fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a" Feb 13 15:50:23.113710 kubelet[2718]: I0213 15:50:23.113716 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a"} err="failed to get container status \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\": rpc error: code = NotFound desc = an error occurred when try to find container \"fccd53c427c418fcee817f22be939397bb2cffbfebe17b78410cb12245a0b80a\": not found" Feb 13 15:50:23.113828 kubelet[2718]: I0213 15:50:23.113746 2718 scope.go:117] "RemoveContainer" 
containerID="1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca" Feb 13 15:50:23.113970 containerd[1496]: time="2025-02-13T15:50:23.113934040Z" level=error msg="ContainerStatus for \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\": not found" Feb 13 15:50:23.114097 kubelet[2718]: E0213 15:50:23.114072 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\": not found" containerID="1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca" Feb 13 15:50:23.114168 kubelet[2718]: I0213 15:50:23.114099 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca"} err="failed to get container status \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b9b68bf4490f3de2cafe6d8828dc87d9bcf162de6c708c8034086658748d0ca\": not found" Feb 13 15:50:23.114168 kubelet[2718]: I0213 15:50:23.114117 2718 scope.go:117] "RemoveContainer" containerID="b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b" Feb 13 15:50:23.114326 containerd[1496]: time="2025-02-13T15:50:23.114294011Z" level=error msg="ContainerStatus for \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\": not found" Feb 13 15:50:23.114440 kubelet[2718]: E0213 15:50:23.114416 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\": not found" containerID="b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b" Feb 13 15:50:23.114490 kubelet[2718]: I0213 15:50:23.114444 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b"} err="failed to get container status \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b599b0bdb323449c98e5ad97701120f4cfd3f977fc0b19a0a557559c1737348b\": not found" Feb 13 15:50:23.114490 kubelet[2718]: I0213 15:50:23.114462 2718 scope.go:117] "RemoveContainer" containerID="83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91" Feb 13 15:50:23.115866 containerd[1496]: time="2025-02-13T15:50:23.115827354Z" level=info msg="RemoveContainer for \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\"" Feb 13 15:50:23.119614 containerd[1496]: time="2025-02-13T15:50:23.119586559Z" level=info msg="RemoveContainer for \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\" returns successfully" Feb 13 15:50:23.119772 kubelet[2718]: I0213 15:50:23.119742 2718 scope.go:117] "RemoveContainer" containerID="83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91" Feb 13 15:50:23.120024 containerd[1496]: time="2025-02-13T15:50:23.119983430Z" 
level=error msg="ContainerStatus for \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\": not found" Feb 13 15:50:23.120166 kubelet[2718]: E0213 15:50:23.120130 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\": not found" containerID="83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91" Feb 13 15:50:23.120229 kubelet[2718]: I0213 15:50:23.120163 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91"} err="failed to get container status \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\": rpc error: code = NotFound desc = an error occurred when try to find container \"83463fedd9bb97e5e5dc28453846e57b508b858e720503a306e7f04c27d32b91\": not found" Feb 13 15:50:23.896655 sshd[4398]: Connection closed by 10.0.0.1 port 33746 Feb 13 15:50:23.898894 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Feb 13 15:50:23.914311 systemd[1]: sshd@28-10.0.0.60:22-10.0.0.1:33746.service: Deactivated successfully. Feb 13 15:50:23.920905 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 15:50:23.924907 systemd-logind[1478]: Session 29 logged out. Waiting for processes to exit. Feb 13 15:50:23.936022 systemd[1]: Started sshd@29-10.0.0.60:22-10.0.0.1:33756.service - OpenSSH per-connection server daemon (10.0.0.1:33756). Feb 13 15:50:23.937479 systemd-logind[1478]: Removed session 29. Feb 13 15:50:24.011346 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 33756 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:24.012056 sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:24.020945 systemd-logind[1478]: New session 30 of user core. Feb 13 15:50:24.031712 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 15:50:24.342790 kubelet[2718]: I0213 15:50:24.339708 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa03493-0fcc-4823-a50a-f1211ddf3e96" path="/var/lib/kubelet/pods/bfa03493-0fcc-4823-a50a-f1211ddf3e96/volumes" Feb 13 15:50:24.342790 kubelet[2718]: I0213 15:50:24.340949 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4048bd4-6051-425b-a63c-aa9843d3cf79" path="/var/lib/kubelet/pods/d4048bd4-6051-425b-a63c-aa9843d3cf79/volumes" Feb 13 15:50:24.742662 sshd[4557]: Connection closed by 10.0.0.1 port 33756 Feb 13 15:50:24.744773 sshd-session[4555]: pam_unix(sshd:session): session closed for user core Feb 13 15:50:24.753023 systemd[1]: sshd@29-10.0.0.60:22-10.0.0.1:33756.service: Deactivated successfully. Feb 13 15:50:24.757085 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 15:50:24.758617 systemd-logind[1478]: Session 30 logged out. Waiting for processes to exit. 
Feb 13 15:50:24.759509 kubelet[2718]: I0213 15:50:24.759466 2718 topology_manager.go:215] "Topology Admit Handler" podUID="c6202f35-1e95-4da0-8d2a-891946b1db06" podNamespace="kube-system" podName="cilium-f4txj" Feb 13 15:50:24.759591 kubelet[2718]: E0213 15:50:24.759565 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa03493-0fcc-4823-a50a-f1211ddf3e96" containerName="clean-cilium-state" Feb 13 15:50:24.759591 kubelet[2718]: E0213 15:50:24.759578 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa03493-0fcc-4823-a50a-f1211ddf3e96" containerName="apply-sysctl-overwrites" Feb 13 15:50:24.759591 kubelet[2718]: E0213 15:50:24.759586 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa03493-0fcc-4823-a50a-f1211ddf3e96" containerName="mount-cgroup" Feb 13 15:50:24.759674 kubelet[2718]: E0213 15:50:24.759594 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d4048bd4-6051-425b-a63c-aa9843d3cf79" containerName="cilium-operator" Feb 13 15:50:24.759674 kubelet[2718]: E0213 15:50:24.759602 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa03493-0fcc-4823-a50a-f1211ddf3e96" containerName="mount-bpf-fs" Feb 13 15:50:24.759674 kubelet[2718]: E0213 15:50:24.759610 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa03493-0fcc-4823-a50a-f1211ddf3e96" containerName="cilium-agent" Feb 13 15:50:24.759674 kubelet[2718]: I0213 15:50:24.759638 2718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4048bd4-6051-425b-a63c-aa9843d3cf79" containerName="cilium-operator" Feb 13 15:50:24.759674 kubelet[2718]: I0213 15:50:24.759650 2718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa03493-0fcc-4823-a50a-f1211ddf3e96" containerName="cilium-agent" Feb 13 15:50:24.767248 systemd[1]: Started sshd@30-10.0.0.60:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). Feb 13 15:50:24.772706 systemd-logind[1478]: Removed session 30. Feb 13 15:50:24.785294 systemd[1]: Created slice kubepods-burstable-podc6202f35_1e95_4da0_8d2a_891946b1db06.slice - libcontainer container kubepods-burstable-podc6202f35_1e95_4da0_8d2a_891946b1db06.slice. Feb 13 15:50:24.817203 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:24.819721 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:24.824891 systemd-logind[1478]: New session 31 of user core. Feb 13 15:50:24.829694 systemd[1]: Started session-31.scope - Session 31 of User core. 
Feb 13 15:50:24.883971 sshd[4571]: Connection closed by 10.0.0.1 port 33768
Feb 13 15:50:24.884696 sshd-session[4569]: pam_unix(sshd:session): session closed for user core
Feb 13 15:50:24.884859 kubelet[2718]: I0213 15:50:24.884808 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-hostproc\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.884859 kubelet[2718]: I0213 15:50:24.884840 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-cni-path\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.884859 kubelet[2718]: I0213 15:50:24.884856 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-lib-modules\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885111 kubelet[2718]: I0213 15:50:24.884874 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6202f35-1e95-4da0-8d2a-891946b1db06-cilium-ipsec-secrets\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885111 kubelet[2718]: I0213 15:50:24.884893 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6202f35-1e95-4da0-8d2a-891946b1db06-clustermesh-secrets\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885111 kubelet[2718]: I0213 15:50:24.884909 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kf94\" (UniqueName: \"kubernetes.io/projected/c6202f35-1e95-4da0-8d2a-891946b1db06-kube-api-access-5kf94\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885111 kubelet[2718]: I0213 15:50:24.884996 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-cilium-run\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885111 kubelet[2718]: I0213 15:50:24.885033 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-bpf-maps\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885111 kubelet[2718]: I0213 15:50:24.885051 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-cilium-cgroup\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885256 kubelet[2718]: I0213 15:50:24.885068 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-xtables-lock\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885256 kubelet[2718]: I0213 15:50:24.885085 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6202f35-1e95-4da0-8d2a-891946b1db06-hubble-tls\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885256 kubelet[2718]: I0213 15:50:24.885103 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6202f35-1e95-4da0-8d2a-891946b1db06-cilium-config-path\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885256 kubelet[2718]: I0213 15:50:24.885119 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-host-proc-sys-net\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885256 kubelet[2718]: I0213 15:50:24.885146 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-etc-cni-netd\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.885256 kubelet[2718]: I0213 15:50:24.885162 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6202f35-1e95-4da0-8d2a-891946b1db06-host-proc-sys-kernel\") pod \"cilium-f4txj\" (UID: \"c6202f35-1e95-4da0-8d2a-891946b1db06\") " pod="kube-system/cilium-f4txj"
Feb 13 15:50:24.892617 systemd[1]: sshd@30-10.0.0.60:22-10.0.0.1:33768.service: Deactivated successfully.
Feb 13 15:50:24.894520 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 15:50:24.896236 systemd-logind[1478]: Session 31 logged out. Waiting for processes to exit.
Feb 13 15:50:24.903795 systemd[1]: Started sshd@31-10.0.0.60:22-10.0.0.1:33782.service - OpenSSH per-connection server daemon (10.0.0.1:33782).
Feb 13 15:50:24.904746 systemd-logind[1478]: Removed session 31.
Feb 13 15:50:24.937330 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 33782 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:50:24.939072 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:50:24.943669 systemd-logind[1478]: New session 32 of user core.
Feb 13 15:50:24.962664 systemd[1]: Started session-32.scope - Session 32 of User core.
Feb 13 15:50:25.028786 kubelet[2718]: I0213 15:50:25.028736 2718 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:50:25Z","lastTransitionTime":"2025-02-13T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:50:25.090039 kubelet[2718]: E0213 15:50:25.089992 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:25.090598 containerd[1496]: time="2025-02-13T15:50:25.090556738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4txj,Uid:c6202f35-1e95-4da0-8d2a-891946b1db06,Namespace:kube-system,Attempt:0,}"
Feb 13 15:50:25.117966 containerd[1496]: time="2025-02-13T15:50:25.117742806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:50:25.117966 containerd[1496]: time="2025-02-13T15:50:25.117813450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:50:25.117966 containerd[1496]: time="2025-02-13T15:50:25.117824781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:50:25.117966 containerd[1496]: time="2025-02-13T15:50:25.117912598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:50:25.140774 systemd[1]: Started cri-containerd-77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78.scope - libcontainer container 77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78.
Feb 13 15:50:25.164832 containerd[1496]: time="2025-02-13T15:50:25.164764365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4txj,Uid:c6202f35-1e95-4da0-8d2a-891946b1db06,Namespace:kube-system,Attempt:0,} returns sandbox id \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\""
Feb 13 15:50:25.165648 kubelet[2718]: E0213 15:50:25.165599 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:25.168323 containerd[1496]: time="2025-02-13T15:50:25.168277522Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:50:25.182421 containerd[1496]: time="2025-02-13T15:50:25.182365278Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da0d24b6e6785bfe93f53e39baa633d9ce2636f07e6b9247792ec3f632c005d7\""
Feb 13 15:50:25.182878 containerd[1496]: time="2025-02-13T15:50:25.182846218Z" level=info msg="StartContainer for \"da0d24b6e6785bfe93f53e39baa633d9ce2636f07e6b9247792ec3f632c005d7\""
Feb 13 15:50:25.208677 systemd[1]: Started cri-containerd-da0d24b6e6785bfe93f53e39baa633d9ce2636f07e6b9247792ec3f632c005d7.scope - libcontainer container da0d24b6e6785bfe93f53e39baa633d9ce2636f07e6b9247792ec3f632c005d7.
Feb 13 15:50:25.239353 containerd[1496]: time="2025-02-13T15:50:25.239285677Z" level=info msg="StartContainer for \"da0d24b6e6785bfe93f53e39baa633d9ce2636f07e6b9247792ec3f632c005d7\" returns successfully"
Feb 13 15:50:25.250391 systemd[1]: cri-containerd-da0d24b6e6785bfe93f53e39baa633d9ce2636f07e6b9247792ec3f632c005d7.scope: Deactivated successfully.
Feb 13 15:50:25.292421 containerd[1496]: time="2025-02-13T15:50:25.292193663Z" level=info msg="shim disconnected" id=da0d24b6e6785bfe93f53e39baa633d9ce2636f07e6b9247792ec3f632c005d7 namespace=k8s.io
Feb 13 15:50:25.292421 containerd[1496]: time="2025-02-13T15:50:25.292301938Z" level=warning msg="cleaning up after shim disconnected" id=da0d24b6e6785bfe93f53e39baa633d9ce2636f07e6b9247792ec3f632c005d7 namespace=k8s.io
Feb 13 15:50:25.292421 containerd[1496]: time="2025-02-13T15:50:25.292314382Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:50:26.089386 kubelet[2718]: E0213 15:50:26.089360 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:26.090924 containerd[1496]: time="2025-02-13T15:50:26.090882783Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:50:26.488858 containerd[1496]: time="2025-02-13T15:50:26.488740002Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe\""
Feb 13 15:50:26.489321 containerd[1496]: time="2025-02-13T15:50:26.489278941Z" level=info msg="StartContainer for \"ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe\""
Feb 13 15:50:26.517675 systemd[1]: Started cri-containerd-ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe.scope - libcontainer container ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe.
Feb 13 15:50:26.550472 systemd[1]: cri-containerd-ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe.scope: Deactivated successfully.
Feb 13 15:50:26.588377 containerd[1496]: time="2025-02-13T15:50:26.588325705Z" level=info msg="StartContainer for \"ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe\" returns successfully"
Feb 13 15:50:26.692750 containerd[1496]: time="2025-02-13T15:50:26.692680361Z" level=info msg="shim disconnected" id=ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe namespace=k8s.io
Feb 13 15:50:26.692750 containerd[1496]: time="2025-02-13T15:50:26.692746607Z" level=warning msg="cleaning up after shim disconnected" id=ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe namespace=k8s.io
Feb 13 15:50:26.692750 containerd[1496]: time="2025-02-13T15:50:26.692758760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:50:26.990685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef0a784fc786a59a0fcd31860e8c96001c4930961ac9a3b05f17917bbdb73ebe-rootfs.mount: Deactivated successfully.
Feb 13 15:50:27.092597 kubelet[2718]: E0213 15:50:27.092557 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:27.097100 containerd[1496]: time="2025-02-13T15:50:27.097049279Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:50:27.355235 containerd[1496]: time="2025-02-13T15:50:27.355177421Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e\""
Feb 13 15:50:27.355871 containerd[1496]: time="2025-02-13T15:50:27.355831449Z" level=info msg="StartContainer for \"343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e\""
Feb 13 15:50:27.389722 systemd[1]: Started cri-containerd-343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e.scope - libcontainer container 343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e.
Feb 13 15:50:27.408222 kubelet[2718]: E0213 15:50:27.408172 2718 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:50:27.421459 containerd[1496]: time="2025-02-13T15:50:27.421402726Z" level=info msg="StartContainer for \"343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e\" returns successfully"
Feb 13 15:50:27.423167 systemd[1]: cri-containerd-343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e.scope: Deactivated successfully.
Feb 13 15:50:27.451763 containerd[1496]: time="2025-02-13T15:50:27.451688069Z" level=info msg="shim disconnected" id=343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e namespace=k8s.io
Feb 13 15:50:27.451763 containerd[1496]: time="2025-02-13T15:50:27.451760456Z" level=warning msg="cleaning up after shim disconnected" id=343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e namespace=k8s.io
Feb 13 15:50:27.451763 containerd[1496]: time="2025-02-13T15:50:27.451771106Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:50:27.991359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-343b7d64d074ccb176450a89e8fa0702ceec94e64898ccc9eb6ea58f974e919e-rootfs.mount: Deactivated successfully.
Feb 13 15:50:28.096331 kubelet[2718]: E0213 15:50:28.096300 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:28.099241 containerd[1496]: time="2025-02-13T15:50:28.099072661Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:50:28.123778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213285474.mount: Deactivated successfully.
Feb 13 15:50:28.123951 containerd[1496]: time="2025-02-13T15:50:28.123827888Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4\""
Feb 13 15:50:28.124743 containerd[1496]: time="2025-02-13T15:50:28.124604417Z" level=info msg="StartContainer for \"973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4\""
Feb 13 15:50:28.167783 systemd[1]: Started cri-containerd-973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4.scope - libcontainer container 973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4.
Feb 13 15:50:28.192578 systemd[1]: cri-containerd-973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4.scope: Deactivated successfully.
Feb 13 15:50:28.194199 containerd[1496]: time="2025-02-13T15:50:28.194162944Z" level=info msg="StartContainer for \"973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4\" returns successfully"
Feb 13 15:50:28.217342 containerd[1496]: time="2025-02-13T15:50:28.217271496Z" level=info msg="shim disconnected" id=973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4 namespace=k8s.io
Feb 13 15:50:28.217342 containerd[1496]: time="2025-02-13T15:50:28.217335827Z" level=warning msg="cleaning up after shim disconnected" id=973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4 namespace=k8s.io
Feb 13 15:50:28.217342 containerd[1496]: time="2025-02-13T15:50:28.217346027Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:50:28.990907 systemd[1]: run-containerd-runc-k8s.io-973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4-runc.DIeQJG.mount: Deactivated successfully.
Feb 13 15:50:28.991025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-973b1e03a4333f3571a043ccf89a193c90a949777c23c8a15439ca749c1df1f4-rootfs.mount: Deactivated successfully.
Feb 13 15:50:29.099870 kubelet[2718]: E0213 15:50:29.099839 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:29.102203 containerd[1496]: time="2025-02-13T15:50:29.102147312Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:50:29.120905 containerd[1496]: time="2025-02-13T15:50:29.120862430Z" level=info msg="CreateContainer within sandbox \"77602b7133e5e19f56588d92fc9764304e06f5563bd82fff79a0706106e6cb78\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c30604f1288a52d44cc02eea1c91c52e5b9ca8a6db4f79adddd0a7d08d62d6f1\""
Feb 13 15:50:29.121399 containerd[1496]: time="2025-02-13T15:50:29.121379097Z" level=info msg="StartContainer for \"c30604f1288a52d44cc02eea1c91c52e5b9ca8a6db4f79adddd0a7d08d62d6f1\""
Feb 13 15:50:29.160670 systemd[1]: Started cri-containerd-c30604f1288a52d44cc02eea1c91c52e5b9ca8a6db4f79adddd0a7d08d62d6f1.scope - libcontainer container c30604f1288a52d44cc02eea1c91c52e5b9ca8a6db4f79adddd0a7d08d62d6f1.
Feb 13 15:50:29.188946 containerd[1496]: time="2025-02-13T15:50:29.188901685Z" level=info msg="StartContainer for \"c30604f1288a52d44cc02eea1c91c52e5b9ca8a6db4f79adddd0a7d08d62d6f1\" returns successfully"
Feb 13 15:50:29.620577 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:50:29.654568 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
Feb 13 15:50:29.678579 kernel: DRBG: Continuing without Jitter RNG
Feb 13 15:50:30.104210 kubelet[2718]: E0213 15:50:30.104179 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:30.117388 kubelet[2718]: I0213 15:50:30.117325 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f4txj" podStartSLOduration=6.117308917 podStartE2EDuration="6.117308917s" podCreationTimestamp="2025-02-13 15:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:50:30.116604326 +0000 UTC m=+107.870936545" watchObservedRunningTime="2025-02-13 15:50:30.117308917 +0000 UTC m=+107.871641136"
Feb 13 15:50:31.105828 kubelet[2718]: E0213 15:50:31.105795 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:31.456263 systemd[1]: run-containerd-runc-k8s.io-c30604f1288a52d44cc02eea1c91c52e5b9ca8a6db4f79adddd0a7d08d62d6f1-runc.jwocLZ.mount: Deactivated successfully.
Feb 13 15:50:32.669159 systemd-networkd[1410]: lxc_health: Link UP
Feb 13 15:50:32.675033 systemd-networkd[1410]: lxc_health: Gained carrier
Feb 13 15:50:33.092693 kubelet[2718]: E0213 15:50:33.092302 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:33.110305 kubelet[2718]: E0213 15:50:33.109869 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:34.111283 kubelet[2718]: E0213 15:50:34.111245 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:50:34.445796 systemd-networkd[1410]: lxc_health: Gained IPv6LL
Feb 13 15:50:39.952096 sshd[4579]: Connection closed by 10.0.0.1 port 33782
Feb 13 15:50:39.952887 sshd-session[4577]: pam_unix(sshd:session): session closed for user core
Feb 13 15:50:39.957510 systemd[1]: sshd@31-10.0.0.60:22-10.0.0.1:33782.service: Deactivated successfully.
Feb 13 15:50:39.960138 systemd[1]: session-32.scope: Deactivated successfully.
Feb 13 15:50:39.960906 systemd-logind[1478]: Session 32 logged out. Waiting for processes to exit.
Feb 13 15:50:39.961839 systemd-logind[1478]: Removed session 32.
Feb 13 15:50:40.336248 kubelet[2718]: E0213 15:50:40.336178 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"