Jan 29 16:13:28.904253 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:13:28.904303 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:13:28.904319 kernel: BIOS-provided physical RAM map:
Jan 29 16:13:28.904328 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:13:28.904337 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:13:28.904345 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:13:28.904356 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 16:13:28.904365 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 16:13:28.904374 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:13:28.904386 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:13:28.904395 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:13:28.904404 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:13:28.904416 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:13:28.904425 kernel: NX (Execute Disable) protection: active
Jan 29 16:13:28.904436 kernel: APIC: Static calls initialized
Jan 29 16:13:28.904453 kernel: SMBIOS 2.8 present.
Jan 29 16:13:28.904462 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 16:13:28.904472 kernel: Hypervisor detected: KVM
Jan 29 16:13:28.904481 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:13:28.904491 kernel: kvm-clock: using sched offset of 3364892695 cycles
Jan 29 16:13:28.904500 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:13:28.904510 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 16:13:28.904520 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:13:28.904531 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:13:28.904540 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 16:13:28.904554 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:13:28.904564 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:13:28.904574 kernel: Using GB pages for direct mapping
Jan 29 16:13:28.904583 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:13:28.904593 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 16:13:28.904612 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:13:28.904630 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:13:28.904652 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:13:28.904670 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 16:13:28.904695 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:13:28.904717 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:13:28.904739 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:13:28.904756 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:13:28.904778 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 16:13:28.904796 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 16:13:28.904821 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 16:13:28.904834 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 16:13:28.904844 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 16:13:28.904863 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 16:13:28.904873 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 16:13:28.904886 kernel: No NUMA configuration found
Jan 29 16:13:28.904897 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 16:13:28.904907 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 16:13:28.904921 kernel: Zone ranges:
Jan 29 16:13:28.904931 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:13:28.904941 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 16:13:28.904951 kernel: Normal empty
Jan 29 16:13:28.904961 kernel: Movable zone start for each node
Jan 29 16:13:28.904971 kernel: Early memory node ranges
Jan 29 16:13:28.904981 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:13:28.904991 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 16:13:28.905001 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 16:13:28.905015 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:13:28.905028 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:13:28.905038 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:13:28.905048 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:13:28.905059 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:13:28.905069 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:13:28.905079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:13:28.905089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:13:28.905099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:13:28.905110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:13:28.905134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:13:28.905146 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:13:28.905166 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:13:28.905177 kernel: TSC deadline timer available
Jan 29 16:13:28.905187 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 16:13:28.905197 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:13:28.905207 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 16:13:28.905221 kernel: kvm-guest: setup PV sched yield
Jan 29 16:13:28.905231 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:13:28.905246 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:13:28.905257 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:13:28.905267 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 16:13:28.905292 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 16:13:28.905303 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 16:13:28.905313 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 16:13:28.905322 kernel: kvm-guest: PV spinlocks enabled
Jan 29 16:13:28.905332 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 16:13:28.905344 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:13:28.905360 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:13:28.905370 kernel: random: crng init done
Jan 29 16:13:28.905380 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:13:28.905390 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:13:28.905400 kernel: Fallback order for Node 0: 0
Jan 29 16:13:28.905410 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 16:13:28.905420 kernel: Policy zone: DMA32
Jan 29 16:13:28.905430 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:13:28.905444 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 138948K reserved, 0K cma-reserved)
Jan 29 16:13:28.905455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 16:13:28.905465 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:13:28.905475 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:13:28.905485 kernel: Dynamic Preempt: voluntary
Jan 29 16:13:28.905495 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:13:28.905506 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:13:28.905516 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 16:13:28.905527 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:13:28.905540 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:13:28.905550 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:13:28.905561 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:13:28.905575 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 16:13:28.905585 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 16:13:28.905595 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:13:28.905605 kernel: Console: colour VGA+ 80x25
Jan 29 16:13:28.905615 kernel: printk: console [ttyS0] enabled
Jan 29 16:13:28.905625 kernel: ACPI: Core revision 20230628
Jan 29 16:13:28.905639 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:13:28.905649 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:13:28.905659 kernel: x2apic enabled
Jan 29 16:13:28.905669 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:13:28.905679 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 16:13:28.905690 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 16:13:28.905700 kernel: kvm-guest: setup PV IPIs
Jan 29 16:13:28.905723 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:13:28.905734 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:13:28.905744 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 16:13:28.905755 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:13:28.905765 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:13:28.905780 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:13:28.905790 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:13:28.905801 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:13:28.905811 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:13:28.905822 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:13:28.905836 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:13:28.905857 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:13:28.905869 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:13:28.905879 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:13:28.905890 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:13:28.905901 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:13:28.905912 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:13:28.905923 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:13:28.905937 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:13:28.905948 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:13:28.905959 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:13:28.905969 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:13:28.905980 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:13:28.905990 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:13:28.906001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:13:28.906011 kernel: landlock: Up and running.
Jan 29 16:13:28.906022 kernel: SELinux: Initializing.
Jan 29 16:13:28.906036 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:13:28.906046 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:13:28.906057 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:13:28.906068 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:13:28.906079 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:13:28.906089 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:13:28.906100 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:13:28.906114 kernel: ... version: 0
Jan 29 16:13:28.906128 kernel: ... bit width: 48
Jan 29 16:13:28.906138 kernel: ... generic registers: 6
Jan 29 16:13:28.906148 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:13:28.906159 kernel: ... max period: 00007fffffffffff
Jan 29 16:13:28.906169 kernel: ... fixed-purpose events: 0
Jan 29 16:13:28.906180 kernel: ... event mask: 000000000000003f
Jan 29 16:13:28.906190 kernel: signal: max sigframe size: 1776
Jan 29 16:13:28.906200 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:13:28.906211 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:13:28.906222 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:13:28.906236 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:13:28.906246 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 16:13:28.906256 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 16:13:28.906267 kernel: smpboot: Max logical packages: 1
Jan 29 16:13:28.906293 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 16:13:28.906306 kernel: devtmpfs: initialized
Jan 29 16:13:28.906318 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:13:28.906329 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:13:28.906340 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 16:13:28.906355 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:13:28.906365 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:13:28.906376 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:13:28.906387 kernel: audit: type=2000 audit(1738167207.998:1): state=initialized audit_enabled=0 res=1
Jan 29 16:13:28.906397 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:13:28.906407 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:13:28.906418 kernel: cpuidle: using governor menu
Jan 29 16:13:28.906428 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:13:28.906439 kernel: dca service started, version 1.12.1
Jan 29 16:13:28.906453 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:13:28.906463 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 16:13:28.906474 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:13:28.906485 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:13:28.906495 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:13:28.906506 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:13:28.906517 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:13:28.906527 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:13:28.906538 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:13:28.906551 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:13:28.906562 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:13:28.906573 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:13:28.906583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:13:28.906593 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:13:28.906604 kernel: ACPI: Interpreter enabled
Jan 29 16:13:28.906614 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 16:13:28.906625 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:13:28.906635 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:13:28.906649 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:13:28.906660 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:13:28.906670 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:13:28.906995 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:13:28.907174 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:13:28.907379 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:13:28.907395 kernel: PCI host bridge to bus 0000:00
Jan 29 16:13:28.907577 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:13:28.907729 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:13:28.907889 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:13:28.908131 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 16:13:28.908312 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:13:28.908470 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:13:28.908623 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:13:28.908842 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:13:28.909047 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 16:13:28.909217 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 16:13:28.909420 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 16:13:28.909585 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 16:13:28.909750 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:13:28.909951 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:13:28.910129 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 16:13:28.910319 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 16:13:28.910494 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 16:13:28.910697 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:13:28.910876 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 16:13:28.911045 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 16:13:28.911217 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 16:13:28.911431 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:13:28.911604 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 16:13:28.911773 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 16:13:28.911953 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 16:13:28.912120 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 16:13:28.912333 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:13:28.912512 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:13:28.912693 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:13:28.912873 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 16:13:28.913041 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 16:13:28.913414 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:13:28.913587 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:13:28.913603 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:13:28.913621 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:13:28.913631 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:13:28.913642 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:13:28.913653 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:13:28.913663 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:13:28.913678 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:13:28.913689 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:13:28.913700 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:13:28.913710 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:13:28.913725 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:13:28.913735 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:13:28.913746 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:13:28.913757 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:13:28.913767 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:13:28.913778 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:13:28.913788 kernel: iommu: Default domain type: Translated
Jan 29 16:13:28.913799 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:13:28.913810 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:13:28.913824 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:13:28.913834 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:13:28.913845 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 16:13:28.914025 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:13:28.914191 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:13:28.914444 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:13:28.914461 kernel: vgaarb: loaded
Jan 29 16:13:28.914473 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:13:28.914489 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:13:28.914500 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:13:28.914511 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:13:28.914522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:13:28.914532 kernel: pnp: PnP ACPI init
Jan 29 16:13:28.914808 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:13:28.914836 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 16:13:28.914858 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:13:28.914875 kernel: NET: Registered PF_INET protocol family
Jan 29 16:13:28.914886 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:13:28.914897 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:13:28.914908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:13:28.914919 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:13:28.914930 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:13:28.914941 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:13:28.914951 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:13:28.914962 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:13:28.914976 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:13:28.914987 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:13:28.915211 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:13:28.915400 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:13:28.915805 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:13:28.915970 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 16:13:28.916142 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:13:28.916317 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:13:28.916340 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:13:28.916351 kernel: Initialise system trusted keyrings
Jan 29 16:13:28.916362 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:13:28.916372 kernel: Key type asymmetric registered
Jan 29 16:13:28.916383 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:13:28.916394 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:13:28.916404 kernel: io scheduler mq-deadline registered
Jan 29 16:13:28.916415 kernel: io scheduler kyber registered
Jan 29 16:13:28.916426 kernel: io scheduler bfq registered
Jan 29 16:13:28.916440 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:13:28.916451 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:13:28.916462 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:13:28.916473 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 16:13:28.916483 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:13:28.916494 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:13:28.916505 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:13:28.916516 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:13:28.916526 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:13:28.916729 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 16:13:28.916746 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:13:28.916908 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 16:13:28.917078 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:13:28 UTC (1738167208)
Jan 29 16:13:28.917237 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 16:13:28.917252 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:13:28.917262 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:13:28.917326 kernel: Segment Routing with IPv6
Jan 29 16:13:28.917345 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:13:28.917356 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:13:28.917367 kernel: Key type dns_resolver registered
Jan 29 16:13:28.917377 kernel: IPI shorthand broadcast: enabled
Jan 29 16:13:28.917388 kernel: sched_clock: Marking stable (670002308, 106041081)->(836142295, -60098906)
Jan 29 16:13:28.917398 kernel: registered taskstats version 1
Jan 29 16:13:28.917409 kernel: Loading compiled-in X.509 certificates
Jan 29 16:13:28.917420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:13:28.917430 kernel: Key type .fscrypt registered
Jan 29 16:13:28.917459 kernel: Key type fscrypt-provisioning registered
Jan 29 16:13:28.917476 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:13:28.917487 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:13:28.917497 kernel: ima: No architecture policies found
Jan 29 16:13:28.917508 kernel: clk: Disabling unused clocks
Jan 29 16:13:28.917521 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:13:28.917532 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:13:28.917543 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:13:28.917553 kernel: Run /init as init process
Jan 29 16:13:28.917568 kernel: with arguments:
Jan 29 16:13:28.917579 kernel: /init
Jan 29 16:13:28.917589 kernel: with environment:
Jan 29 16:13:28.917600 kernel: HOME=/
Jan 29 16:13:28.917610 kernel: TERM=linux
Jan 29 16:13:28.917621 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:13:28.917632 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:13:28.917647 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:13:28.917662 systemd[1]: Detected virtualization kvm.
Jan 29 16:13:28.917674 systemd[1]: Detected architecture x86-64.
Jan 29 16:13:28.917684 systemd[1]: Running in initrd.
Jan 29 16:13:28.917695 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:13:28.917707 systemd[1]: Hostname set to .
Jan 29 16:13:28.917718 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:13:28.917729 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:13:28.917741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:13:28.917756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:13:28.917785 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:13:28.917800 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:13:28.917812 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:13:28.917825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:13:28.917842 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:13:28.917861 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:13:28.917873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:13:28.917884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:13:28.917896 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:13:28.917907 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:13:28.917919 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:13:28.917930 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:13:28.917945 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:13:28.917957 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:13:28.917969 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:13:28.917980 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:13:28.917995 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:13:28.918007 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:13:28.918019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:13:28.918030 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:13:28.918042 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:13:28.918057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:13:28.918069 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:13:28.918080 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:13:28.918092 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:13:28.918104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:13:28.918119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:13:28.918131 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:13:28.918142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:13:28.918158 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:13:28.918170 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:13:28.918219 systemd-journald[194]: Collecting audit messages is disabled.
Jan 29 16:13:28.918248 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:13:28.918260 systemd-journald[194]: Journal started
Jan 29 16:13:28.918303 systemd-journald[194]: Runtime Journal (/run/log/journal/5e5c1239af474efca693005ef5a5e77b) is 6M, max 48.4M, 42.3M free.
Jan 29 16:13:28.905264 systemd-modules-load[195]: Inserted module 'overlay'
Jan 29 16:13:28.944533 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:13:28.944554 kernel: Bridge firewalling registered
Jan 29 16:13:28.936077 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 29 16:13:28.950108 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:13:28.950945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:13:28.953380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:13:28.972466 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:13:28.973303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:13:28.974421 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:13:28.978453 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:13:28.989833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:13:28.990327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:13:28.994078 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:13:28.999482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:13:29.001776 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:13:29.014995 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:13:29.033933 dracut-cmdline[231]: dracut-dracut-053
Jan 29 16:13:29.037682 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:13:29.052804 systemd-resolved[225]: Positive Trust Anchors:
Jan 29 16:13:29.052830 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:13:29.052878 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:13:29.055772 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 29 16:13:29.056997 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:13:29.062711 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:13:29.142304 kernel: SCSI subsystem initialized
Jan 29 16:13:29.151300 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:13:29.162337 kernel: iscsi: registered transport (tcp)
Jan 29 16:13:29.182561 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:13:29.182621 kernel: QLogic iSCSI HBA Driver
Jan 29 16:13:29.233376 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:13:29.240464 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:13:29.265311 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:13:29.265348 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:13:29.267313 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:13:29.311314 kernel: raid6: avx2x4 gen() 26799 MB/s
Jan 29 16:13:29.328303 kernel: raid6: avx2x2 gen() 27436 MB/s
Jan 29 16:13:29.345377 kernel: raid6: avx2x1 gen() 23695 MB/s
Jan 29 16:13:29.345402 kernel: raid6: using algorithm avx2x2 gen() 27436 MB/s
Jan 29 16:13:29.363390 kernel: raid6: .... xor() 20022 MB/s, rmw enabled
Jan 29 16:13:29.363416 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:13:29.385301 kernel: xor: automatically using best checksumming function avx
Jan 29 16:13:29.531320 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:13:29.545410 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:13:29.553508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:13:29.568508 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 29 16:13:29.574160 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:13:29.587432 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:13:29.621107 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Jan 29 16:13:29.655333 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:13:29.671455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:13:29.763520 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:13:29.769474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:13:29.783681 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:13:29.786737 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:13:29.789671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:13:29.792184 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:13:29.798310 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 16:13:29.810148 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 16:13:29.810340 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:13:29.810353 kernel: GPT:9289727 != 19775487
Jan 29 16:13:29.810364 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:13:29.810382 kernel: GPT:9289727 != 19775487
Jan 29 16:13:29.810392 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:13:29.810402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:13:29.804429 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:13:29.815937 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:13:29.819851 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:13:29.830250 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:13:29.830303 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:13:29.838047 kernel: libata version 3.00 loaded.
Jan 29 16:13:29.838531 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:13:29.839067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:13:29.843488 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:13:29.850631 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 16:13:30.000232 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 16:13:30.000255 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 16:13:30.000477 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 16:13:30.000634 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (475)
Jan 29 16:13:30.000646 kernel: scsi host0: ahci
Jan 29 16:13:30.000817 kernel: scsi host1: ahci
Jan 29 16:13:30.000985 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (462)
Jan 29 16:13:30.000997 kernel: scsi host2: ahci
Jan 29 16:13:30.001164 kernel: scsi host3: ahci
Jan 29 16:13:30.001540 kernel: scsi host4: ahci
Jan 29 16:13:30.001788 kernel: scsi host5: ahci
Jan 29 16:13:30.001987 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 29 16:13:30.002000 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 29 16:13:30.002011 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 29 16:13:30.002022 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 29 16:13:30.002033 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 29 16:13:30.002043 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 29 16:13:29.844936 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:13:29.845112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:13:29.846511 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:13:29.979771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:13:29.982137 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:13:30.019559 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:13:30.038678 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:13:30.062476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:13:30.075598 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:13:30.086164 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:13:30.087453 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:13:30.104490 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:13:30.105438 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:13:30.114864 disk-uuid[555]: Primary Header is updated.
Jan 29 16:13:30.114864 disk-uuid[555]: Secondary Entries is updated.
Jan 29 16:13:30.114864 disk-uuid[555]: Secondary Header is updated.
Jan 29 16:13:30.124856 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:13:30.135693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:13:30.142296 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:13:30.313409 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 16:13:30.313500 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 16:13:30.313531 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 16:13:30.313544 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 16:13:30.315307 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 16:13:30.315336 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 16:13:30.316303 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 16:13:30.317777 kernel: ata3.00: applying bridge limits
Jan 29 16:13:30.317842 kernel: ata3.00: configured for UDMA/100
Jan 29 16:13:30.318312 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 16:13:30.372317 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 16:13:30.390059 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:13:30.390078 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:13:31.143320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:13:31.143458 disk-uuid[560]: The operation has completed successfully.
Jan 29 16:13:31.177238 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:13:31.177427 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:13:31.221445 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:13:31.228349 sh[593]: Success
Jan 29 16:13:31.243303 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 16:13:31.280535 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:13:31.298610 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:13:31.303090 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:13:31.313773 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:13:31.313826 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:13:31.313846 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:13:31.315006 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:13:31.316631 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:13:31.321863 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:13:31.323930 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:13:31.334514 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:13:31.336507 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:13:31.350317 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:13:31.350379 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:13:31.352337 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:13:31.355335 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:13:31.367255 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:13:31.369522 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:13:31.379701 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:13:31.387581 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:13:31.445147 ignition[690]: Ignition 2.20.0
Jan 29 16:13:31.445163 ignition[690]: Stage: fetch-offline
Jan 29 16:13:31.445212 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:13:31.445223 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:13:31.445353 ignition[690]: parsed url from cmdline: ""
Jan 29 16:13:31.445358 ignition[690]: no config URL provided
Jan 29 16:13:31.445363 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:13:31.445374 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:13:31.445401 ignition[690]: op(1): [started] loading QEMU firmware config module
Jan 29 16:13:31.445407 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 16:13:31.455189 ignition[690]: op(1): [finished] loading QEMU firmware config module
Jan 29 16:13:31.466582 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:13:31.482499 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:13:31.498545 ignition[690]: parsing config with SHA512: ea7d50292e548e155762d4adef1534fdf55de4d04d88f26cf7c4d4b92c8094051b0ac98bdb65300a713d8aae900526d51d2e6fd929fa8cd5e34f7518df7c7391
Jan 29 16:13:31.503405 unknown[690]: fetched base config from "system"
Jan 29 16:13:31.503558 unknown[690]: fetched user config from "qemu"
Jan 29 16:13:31.504233 ignition[690]: fetch-offline: fetch-offline passed
Jan 29 16:13:31.504389 ignition[690]: Ignition finished successfully
Jan 29 16:13:31.509010 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:13:31.533041 systemd-networkd[783]: lo: Link UP
Jan 29 16:13:31.533050 systemd-networkd[783]: lo: Gained carrier
Jan 29 16:13:31.534957 systemd-networkd[783]: Enumeration completed
Jan 29 16:13:31.535349 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:13:31.535353 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:13:31.536039 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:13:31.536232 systemd-networkd[783]: eth0: Link UP
Jan 29 16:13:31.536236 systemd-networkd[783]: eth0: Gained carrier
Jan 29 16:13:31.536243 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:13:31.541784 systemd[1]: Reached target network.target - Network.
Jan 29 16:13:31.546841 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 16:13:31.557329 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:13:31.557417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:13:31.573684 ignition[787]: Ignition 2.20.0
Jan 29 16:13:31.573696 ignition[787]: Stage: kargs
Jan 29 16:13:31.573881 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:13:31.573893 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:13:31.574770 ignition[787]: kargs: kargs passed
Jan 29 16:13:31.574823 ignition[787]: Ignition finished successfully
Jan 29 16:13:31.578083 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:13:31.591474 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:13:31.602537 ignition[796]: Ignition 2.20.0
Jan 29 16:13:31.602548 ignition[796]: Stage: disks
Jan 29 16:13:31.602709 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:13:31.602722 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:13:31.603565 ignition[796]: disks: disks passed
Jan 29 16:13:31.605949 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:13:31.603613 ignition[796]: Ignition finished successfully
Jan 29 16:13:31.607371 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:13:31.608882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:13:31.611061 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:13:31.612109 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:13:31.613905 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:13:31.625681 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:13:31.636164 systemd-resolved[225]: Detected conflict on linux IN A 10.0.0.32
Jan 29 16:13:31.636181 systemd-resolved[225]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jan 29 16:13:31.640020 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:13:31.646521 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:13:32.330455 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:13:32.452303 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:13:32.452732 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:13:32.453423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:13:32.468365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:13:32.470210 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:13:32.472633 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:13:32.472694 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:13:32.479169 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816)
Jan 29 16:13:32.472725 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:13:32.482824 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:13:32.482845 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:13:32.482855 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:13:32.484298 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:13:32.486034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:13:32.490917 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:13:32.492425 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:13:32.528295 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:13:32.533958 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:13:32.537909 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:13:32.541824 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:13:32.626984 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:13:32.631413 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:13:32.634051 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:13:32.642298 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:13:32.660351 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:13:32.664253 ignition[930]: INFO : Ignition 2.20.0
Jan 29 16:13:32.664253 ignition[930]: INFO : Stage: mount
Jan 29 16:13:32.665996 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:13:32.665996 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:13:32.665996 ignition[930]: INFO : mount: mount passed
Jan 29 16:13:32.665996 ignition[930]: INFO : Ignition finished successfully
Jan 29 16:13:32.671346 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:13:32.682411 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:13:32.841540 systemd-networkd[783]: eth0: Gained IPv6LL
Jan 29 16:13:33.313211 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:13:33.322566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:13:33.330295 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
Jan 29 16:13:33.332601 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:13:33.332630 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:13:33.332646 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:13:33.336311 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:13:33.338238 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:13:33.366402 ignition[961]: INFO : Ignition 2.20.0 Jan 29 16:13:33.366402 ignition[961]: INFO : Stage: files Jan 29 16:13:33.368245 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:13:33.368245 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:13:33.371081 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:13:33.372510 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:13:33.372510 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:13:33.377671 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:13:33.379233 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:13:33.381014 unknown[961]: wrote ssh authorized keys file for user: core Jan 29 16:13:33.382152 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:13:33.384556 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:13:33.386558 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 16:13:33.437962 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:13:33.622925 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:13:33.622925 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:13:33.626925 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 16:13:33.983374 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:13:34.052088 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:13:34.053971 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:13:34.055672 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:13:34.055672 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:13:34.059061 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:13:34.060739 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:13:34.062511 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:13:34.064198 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:13:34.065987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:13:34.067874 ignition[961]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:13:34.069720 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:13:34.071554 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:13:34.074467 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:13:34.077154 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:13:34.079570 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 16:13:34.362236 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:13:34.848230 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:13:34.848230 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:13:34.852241 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:13:34.852241 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:13:34.852241 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:13:34.852241 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 16:13:34.852241 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:13:34.852241 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:13:34.852241 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 16:13:34.852241 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 16:13:34.877362 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:13:34.881967 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:13:34.883592 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 16:13:34.883592 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:13:34.883592 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:13:34.883592 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:13:34.883592 ignition[961]: INFO : files: createResultFile: createFiles: op(13): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:13:34.883592 ignition[961]: INFO : files: files passed Jan 29 16:13:34.883592 ignition[961]: INFO : Ignition finished successfully Jan 29 16:13:34.885825 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:13:34.895578 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:13:34.896928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:13:34.903786 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:13:34.904033 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:13:34.912241 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 16:13:34.915672 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:13:34.915672 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:13:34.919194 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:13:34.919203 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:13:34.921071 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:13:34.931441 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:13:34.956712 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:13:34.956844 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:13:34.959143 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:13:34.961208 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:13:34.963301 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:13:34.964103 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:13:34.982031 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:13:34.988411 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:13:35.001201 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:13:35.003696 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:13:35.005023 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:13:35.007050 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:13:35.007200 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:13:35.009547 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:13:35.011091 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:13:35.013173 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:13:35.015309 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:13:35.017342 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:13:35.019594 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:13:35.021903 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
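
The file writes, the sysext symlink, and the unit presets that ignition[961] reports above are all driven by the host's Ignition JSON config fetched earlier in boot. The fragment below is a minimal, hypothetical sketch of a config with the same shape, emitted via Python's json module; the spec version, SSH key, unit body, and URLs are placeholders mirroring the log, not this machine's actual provisioning data.

    # Hypothetical sketch: build an Ignition-style config fragment resembling the
    # operations logged by the "files" stage above (ssh key for "core", a fetched
    # tarball, a sysext symlink, unit presets). Values are placeholders.
    import json

    config = {
        "ignition": {"version": "3.4.0"},  # spec version is an assumption
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    # Illustrative unit body; the real unit contents are not shown in the log.
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                                "[Service]\nType=oneshot\n"
                                "ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz\n"
                                "[Install]\nWantedBy=multi-user.target\n",
                },
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))

Setting "enabled" true or false on a unit in this list is what produces the "setting preset to enabled/disabled" operations recorded by Ignition above.
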
Jan 29 16:13:35.024224 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:13:35.026301 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:13:35.028671 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:13:35.030340 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:13:35.030498 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:13:35.032789 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:13:35.034219 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:13:35.036315 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:13:35.036446 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:13:35.038567 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:13:35.038695 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:13:35.041117 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:13:35.041235 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:13:35.043079 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:13:35.044831 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:13:35.048363 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:13:35.050542 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:13:35.052541 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:13:35.054365 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:13:35.054469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:13:35.056425 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:13:35.056515 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:13:35.058901 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:13:35.059020 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:13:35.060986 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:13:35.061096 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:13:35.073411 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:13:35.076154 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:13:35.077100 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:13:35.077240 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:13:35.079289 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:13:35.079398 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:13:35.085983 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:13:35.086103 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 29 16:13:35.092854 ignition[1017]: INFO : Ignition 2.20.0 Jan 29 16:13:35.092854 ignition[1017]: INFO : Stage: umount Jan 29 16:13:35.094677 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:13:35.094677 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:13:35.094677 ignition[1017]: INFO : umount: umount passed Jan 29 16:13:35.094677 ignition[1017]: INFO : Ignition finished successfully Jan 29 16:13:35.096266 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:13:35.096414 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:13:35.098219 systemd[1]: Stopped target network.target - Network. Jan 29 16:13:35.099758 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:13:35.099816 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:13:35.101746 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:13:35.101798 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:13:35.103766 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:13:35.103814 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:13:35.105976 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:13:35.106027 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:13:35.108078 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:13:35.110199 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:13:35.113271 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:13:35.118443 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:13:35.118573 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:13:35.123522 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:13:35.123772 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:13:35.123898 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:13:35.126939 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:13:35.127679 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:13:35.127764 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:13:35.139489 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:13:35.141591 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:13:35.141688 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:13:35.144180 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:13:35.144233 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:13:35.146709 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:13:35.146761 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:13:35.151406 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:13:35.151459 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:13:35.153725 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:13:35.157195 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jan 29 16:13:35.157271 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:13:35.166834 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:13:35.167884 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:13:35.172125 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:13:35.173196 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:13:35.176063 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:13:35.176122 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:13:35.179314 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:13:35.179361 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:13:35.182382 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:13:35.183346 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:13:35.185531 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:13:35.185589 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:13:35.188643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:13:35.189650 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:13:35.201443 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:13:35.203726 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:13:35.203786 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:13:35.207412 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:13:35.208535 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:13:35.211268 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:13:35.211334 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:13:35.214821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:13:35.214878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:13:35.219098 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:13:35.220512 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:13:35.222207 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:13:35.223352 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:13:35.322819 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:13:35.324084 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:13:35.326915 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:13:35.329223 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:13:35.330223 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:13:35.352550 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:13:35.359887 systemd[1]: Switching root. Jan 29 16:13:35.389800 systemd-journald[194]: Journal stopped Jan 29 16:13:37.047096 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Jan 29 16:13:37.047220 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:13:37.047241 kernel: SELinux: policy capability open_perms=1 Jan 29 16:13:37.047257 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:13:37.047272 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:13:37.047305 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:13:37.047322 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:13:37.047338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:13:37.047353 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:13:37.047377 kernel: audit: type=1403 audit(1738167216.155:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:13:37.047411 systemd[1]: Successfully loaded SELinux policy in 45.546ms. Jan 29 16:13:37.047442 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.090ms. Jan 29 16:13:37.047461 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:13:37.047478 systemd[1]: Detected virtualization kvm. Jan 29 16:13:37.047494 systemd[1]: Detected architecture x86-64. Jan 29 16:13:37.047511 systemd[1]: Detected first boot. Jan 29 16:13:37.047527 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:13:37.047544 zram_generator::config[1064]: No configuration found. Jan 29 16:13:37.047574 kernel: Guest personality initialized and is inactive Jan 29 16:13:37.047590 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:13:37.047609 kernel: Initialized host personality Jan 29 16:13:37.047624 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:13:37.047651 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:13:37.047670 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:13:37.047686 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:13:37.047703 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:13:37.047719 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:13:37.047749 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:13:37.047767 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:13:37.047784 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:13:37.047800 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:13:37.047817 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:13:37.047834 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:13:37.047850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:13:37.047867 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:13:37.047895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:13:37.047912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 16:13:37.047929 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:13:37.047945 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:13:37.047962 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:13:37.047981 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:13:37.048000 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:13:37.048016 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:13:37.048050 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:13:37.048066 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:13:37.048082 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:13:37.048099 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:13:37.048115 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:13:37.048132 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:13:37.048149 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:13:37.048165 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:13:37.048182 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:13:37.048210 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:13:37.048227 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:13:37.048244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:13:37.048261 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:13:37.048292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:13:37.048309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:13:37.048326 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:13:37.048342 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:13:37.048358 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:13:37.048387 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:13:37.048406 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:13:37.048423 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:13:37.048439 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:13:37.048457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:13:37.048473 systemd[1]: Reached target machines.target - Containers. Jan 29 16:13:37.048490 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:13:37.048506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:13:37.048534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 29 16:13:37.048552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:13:37.048568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:13:37.048584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:13:37.048601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:13:37.048617 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:13:37.048643 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:13:37.048667 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:13:37.048684 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:13:37.048711 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:13:37.048728 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:13:37.048745 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:13:37.048762 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:13:37.048778 kernel: fuse: init (API version 7.39) Jan 29 16:13:37.048796 kernel: loop: module loaded Jan 29 16:13:37.048812 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:13:37.048829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:13:37.048846 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:13:37.048874 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:13:37.048890 kernel: ACPI: bus type drm_connector registered Jan 29 16:13:37.048906 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:13:37.048922 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:13:37.048938 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:13:37.048955 systemd[1]: Stopped verity-setup.service. Jan 29 16:13:37.048983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:13:37.049000 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:13:37.049017 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:13:37.049033 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:13:37.049077 systemd-journald[1133]: Collecting audit messages is disabled. Jan 29 16:13:37.049120 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:13:37.049138 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:13:37.049156 systemd-journald[1133]: Journal started Jan 29 16:13:37.049196 systemd-journald[1133]: Runtime Journal (/run/log/journal/5e5c1239af474efca693005ef5a5e77b) is 6M, max 48.4M, 42.3M free. Jan 29 16:13:36.772656 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:13:36.783348 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jan 29 16:13:36.783849 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:13:37.053027 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:13:37.053816 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:13:37.055489 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:13:37.057351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:13:37.059197 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:13:37.059523 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:13:37.061608 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:13:37.061934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:13:37.064180 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:13:37.064484 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:13:37.066123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:13:37.066421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:13:37.068440 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:13:37.068729 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:13:37.070900 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:13:37.071489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:13:37.073266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:13:37.074957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:13:37.076662 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:13:37.078369 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:13:37.096055 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:13:37.105517 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:13:37.108880 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:13:37.111032 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:13:37.111085 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:13:37.114057 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:13:37.117431 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:13:37.120416 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:13:37.121932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:13:37.126110 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:13:37.130383 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:13:37.132461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 29 16:13:37.143859 systemd-journald[1133]: Time spent on flushing to /var/log/journal/5e5c1239af474efca693005ef5a5e77b is 39.616ms for 969 entries. Jan 29 16:13:37.143859 systemd-journald[1133]: System Journal (/var/log/journal/5e5c1239af474efca693005ef5a5e77b) is 8M, max 195.6M, 187.6M free. Jan 29 16:13:37.234235 systemd-journald[1133]: Received client request to flush runtime journal. Jan 29 16:13:37.234321 kernel: loop0: detected capacity change from 0 to 138176 Jan 29 16:13:37.140491 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:13:37.142214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:13:37.145247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:13:37.151668 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:13:37.170451 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:13:37.175100 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:13:37.212762 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:13:37.214850 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:13:37.215429 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:13:37.229266 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:13:37.237794 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:13:37.248609 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:13:37.260690 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:13:37.264454 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:13:37.268832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:13:37.284320 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:13:37.305212 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:13:37.309746 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 29 16:13:37.309768 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 29 16:13:37.317907 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:13:37.327917 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:13:37.329681 kernel: loop1: detected capacity change from 0 to 210664 Jan 29 16:13:37.342712 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:13:37.393454 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:13:37.450608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:13:37.479342 kernel: loop2: detected capacity change from 0 to 147912 Jan 29 16:13:37.487410 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 29 16:13:37.487432 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 29 16:13:37.495533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 16:13:37.536350 kernel: loop3: detected capacity change from 0 to 138176 Jan 29 16:13:37.561311 kernel: loop4: detected capacity change from 0 to 210664 Jan 29 16:13:37.660314 kernel: loop5: detected capacity change from 0 to 147912 Jan 29 16:13:37.677245 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 16:13:37.677998 (sd-merge)[1212]: Merged extensions into '/usr'. Jan 29 16:13:37.682409 systemd[1]: Reload requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:13:37.682429 systemd[1]: Reloading... Jan 29 16:13:37.789411 zram_generator::config[1238]: No configuration found. Jan 29 16:13:37.964266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:13:38.056845 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:13:38.058177 systemd[1]: Reloading finished in 375 ms. Jan 29 16:13:38.127923 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:13:38.139964 systemd[1]: Starting ensure-sysext.service... Jan 29 16:13:38.143904 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:13:38.158571 systemd[1]: Reload requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:13:38.158617 systemd[1]: Reloading... Jan 29 16:13:38.192957 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:13:38.282414 zram_generator::config[1307]: No configuration found. Jan 29 16:13:38.370085 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:13:38.370496 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:13:38.371543 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:13:38.371877 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 29 16:13:38.371989 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 29 16:13:38.375981 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:13:38.375995 systemd-tmpfiles[1277]: Skipping /boot Jan 29 16:13:38.392060 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:13:38.392077 systemd-tmpfiles[1277]: Skipping /boot Jan 29 16:13:38.482404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:13:38.549189 systemd[1]: Reloading finished in 390 ms. Jan 29 16:13:38.569173 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:13:38.570773 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:13:38.588064 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:13:38.614269 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:13:38.619369 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
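
The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr, after which the unit set is reloaded twice (once requested by systemd-sysext, once by systemctl for ensure-sysext.service). Before merging, sysext checks each image's /usr/lib/extension-release.d/extension-release.<name> file against the host os-release. A simplified, hypothetical sketch of that compatibility rule follows; the real logic lives in systemd and covers more fields, and the example values are made up.

    # Hypothetical, simplified sketch of the sysext compatibility check implied by
    # the sd-merge lines above: the extension must declare ID=_any or an ID (and,
    # if pinned, SYSEXT_LEVEL/VERSION_ID) matching the host. Simplified on purpose.
    def parse_release(text: str) -> dict:
        """Parse os-release style KEY=VALUE lines into a dict."""
        out = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            out[key] = value.strip().strip('"')
        return out

    def extension_compatible(host: dict, ext: dict) -> bool:
        if ext.get("ID") == "_any":
            return True
        if ext.get("ID") != host.get("ID"):
            return False
        # If the extension pins SYSEXT_LEVEL or VERSION_ID, it must match the host.
        for key in ("SYSEXT_LEVEL", "VERSION_ID"):
            if key in ext:
                return ext[key] == host.get(key)
        return True

    host = parse_release('ID=flatcar\nVERSION_ID=4152.2.0\nSYSEXT_LEVEL=1.0\n')  # made-up values
    ext = parse_release('ID=flatcar\nSYSEXT_LEVEL=1.0\n')                         # made-up values
    print(extension_compatible(host, ext))  # True
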
Jan 29 16:13:38.624222 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:13:38.630714 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:13:38.637194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:13:38.645629 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:13:38.652536 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:13:38.652823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:13:38.662635 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:13:38.672118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:13:38.679555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:13:38.685029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:13:38.685230 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:13:38.689384 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:13:38.690641 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:13:38.692676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:13:38.692982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:13:38.699006 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:13:38.701882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:13:38.702445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:13:38.715525 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:13:38.715880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:13:38.722380 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Jan 29 16:13:38.725198 augenrules[1376]: No rules Jan 29 16:13:38.731805 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:13:38.732223 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:13:38.739297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:13:38.739969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:13:38.748724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:13:38.755749 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:13:38.769223 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:13:38.770835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 16:13:38.771153 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:13:38.777579 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:13:38.778977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:13:38.785618 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:13:38.788689 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:13:38.791024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:13:38.791650 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:13:38.794197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:13:38.794499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:13:38.798952 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:13:38.799294 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:13:38.803779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:13:38.821026 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:13:38.824322 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:13:38.873292 systemd[1]: Finished ensure-sysext.service. Jan 29 16:13:38.880810 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:13:38.886329 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1405) Jan 29 16:13:38.889346 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:13:38.893434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:13:38.896672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:13:38.901630 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:13:38.909621 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:13:38.934632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:13:38.938668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:13:38.938729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:13:38.943534 augenrules[1421]: /sbin/augenrules: No change Jan 29 16:13:38.949672 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:13:38.954231 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:13:38.955766 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 29 16:13:38.955820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:13:38.956885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:13:38.957328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:13:38.963495 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:13:38.964103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:13:38.981927 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:13:38.983636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:13:38.993165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:13:38.993518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:13:39.006645 augenrules[1451]: No rules Jan 29 16:13:39.009087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:13:39.012186 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:13:39.018080 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:13:39.020004 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:13:39.020381 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:13:39.027753 systemd-resolved[1356]: Positive Trust Anchors: Jan 29 16:13:39.027778 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:13:39.027822 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:13:39.037941 systemd-resolved[1356]: Defaulting to hostname 'linux'. Jan 29 16:13:39.041264 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:13:39.049428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:13:39.252301 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 16:13:39.254047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:13:39.260835 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 16:13:39.261180 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 16:13:39.261412 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:13:39.261529 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 16:13:39.269454 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:13:39.283976 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 29 16:13:39.289303 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 16:13:39.296253 systemd-networkd[1441]: lo: Link UP Jan 29 16:13:39.296267 systemd-networkd[1441]: lo: Gained carrier Jan 29 16:13:39.298840 systemd-networkd[1441]: Enumeration completed Jan 29 16:13:39.299255 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:13:39.299268 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:13:39.299733 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:13:39.300551 systemd-networkd[1441]: eth0: Link UP Jan 29 16:13:39.300561 systemd-networkd[1441]: eth0: Gained carrier Jan 29 16:13:39.300584 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:13:39.301328 systemd[1]: Reached target network.target - Network. Jan 29 16:13:39.308433 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:13:39.311587 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:13:39.318343 systemd-networkd[1441]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:13:39.366240 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:13:39.368123 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:13:39.376178 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:13:39.397098 systemd-timesyncd[1443]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 16:13:39.397153 systemd-timesyncd[1443]: Initial clock synchronization to Wed 2025-01-29 16:13:39.605563 UTC. Jan 29 16:13:39.422319 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:13:39.436976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:13:39.479112 kernel: kvm_amd: TSC scaling supported Jan 29 16:13:39.479182 kernel: kvm_amd: Nested Virtualization enabled Jan 29 16:13:39.479196 kernel: kvm_amd: Nested Paging enabled Jan 29 16:13:39.479209 kernel: kvm_amd: LBR virtualization supported Jan 29 16:13:39.479701 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 16:13:39.480802 kernel: kvm_amd: Virtual GIF supported Jan 29 16:13:39.506459 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:13:39.542688 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:13:39.565475 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:13:39.568295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:13:39.575820 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:13:39.682199 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:13:39.683878 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:13:39.685075 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:13:39.686335 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
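
Clock synchronization above is performed by systemd-timesyncd against the DHCP-provided server 10.0.0.1:123, followed by the initial step to 2025-01-29 16:13:39.605563 UTC. As a rough illustration of the wire exchange involved, here is a minimal SNTP client sketch in Python; timesyncd itself implements full NTP client behaviour (poll intervals, delay and jitter filtering, gradual slewing), so this is not its algorithm, only the basic request/response. The server address is taken from the log for illustration.

    # Hypothetical sketch of a bare-bones SNTP query, illustrating the kind of
    # exchange systemd-timesyncd performs against 10.0.0.1:123 above. Reads only
    # the server's transmit timestamp; no filtering or clock adjustment.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

    def sntp_query(server: str, port: int = 123, timeout: float = 2.0) -> float:
        packet = bytearray(48)
        packet[0] = 0x1B  # LI=0, version=3, mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, port))
            data, _ = sock.recvfrom(48)
        # Transmit timestamp: integer seconds field at bytes 40..43 of the reply.
        (tx_seconds,) = struct.unpack("!I", data[40:44])
        return tx_seconds - NTP_EPOCH_OFFSET

    if __name__ == "__main__":
        server_time = sntp_query("10.0.0.1")  # address from the log; reachable only on that network
        print("offset vs local clock:", server_time - time.time(), "seconds")
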
Jan 29 16:13:39.687617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:13:39.689088 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:13:39.690454 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:13:39.691749 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:13:39.692995 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:13:39.693039 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:13:39.693950 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:13:39.695959 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:13:39.698992 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:13:39.703324 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:13:39.704800 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:13:39.706063 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:13:39.714032 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:13:39.715485 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:13:39.717939 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:13:39.719603 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:13:39.720766 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:13:39.721749 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:13:39.722732 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:13:39.722768 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:13:39.723808 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:13:39.725913 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:13:39.730184 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:13:39.732650 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:13:39.733692 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:13:39.739630 lvm[1486]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:13:39.739865 jq[1489]: false Jan 29 16:13:39.737481 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:13:39.749415 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 29 16:13:39.751788 extend-filesystems[1490]: Found loop3 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found loop4 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found loop5 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found sr0 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found vda Jan 29 16:13:39.776412 extend-filesystems[1490]: Found vda1 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found vda2 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found vda3 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found usr Jan 29 16:13:39.776412 extend-filesystems[1490]: Found vda4 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found vda6 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found vda7 Jan 29 16:13:39.776412 extend-filesystems[1490]: Found vda9 Jan 29 16:13:39.776412 extend-filesystems[1490]: Checking size of /dev/vda9 Jan 29 16:13:39.772699 dbus-daemon[1488]: [system] SELinux support is enabled Jan 29 16:13:39.778382 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:13:39.785428 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:13:39.789907 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:13:39.791851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:13:39.792337 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:13:39.792969 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:13:39.795442 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:13:39.798353 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:13:39.802289 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:13:39.804050 jq[1506]: true Jan 29 16:13:39.807887 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:13:39.808175 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:13:39.808545 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:13:39.808819 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:13:39.815764 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:13:39.818692 update_engine[1504]: I20250129 16:13:39.816617 1504 main.cc:92] Flatcar Update Engine starting Jan 29 16:13:39.818692 update_engine[1504]: I20250129 16:13:39.818342 1504 update_check_scheduler.cc:74] Next update check in 5m45s Jan 29 16:13:39.816030 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 16:13:39.819511 extend-filesystems[1490]: Resized partition /dev/vda9 Jan 29 16:13:39.832790 extend-filesystems[1520]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:13:39.835412 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:13:39.839303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1403) Jan 29 16:13:39.840123 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:13:39.840155 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:13:39.844108 jq[1513]: true Jan 29 16:13:39.845330 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 16:13:39.846457 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:13:39.846540 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:13:39.856397 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:13:39.877205 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:13:39.879003 systemd-logind[1503]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 16:13:39.879039 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:13:39.879469 systemd-logind[1503]: New seat seat0. Jan 29 16:13:39.892640 tar[1512]: linux-amd64/helm Jan 29 16:13:39.893293 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 16:13:39.902620 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:13:39.925303 extend-filesystems[1520]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 16:13:39.925303 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:13:39.925303 extend-filesystems[1520]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 16:13:39.931990 extend-filesystems[1490]: Resized filesystem in /dev/vda9 Jan 29 16:13:39.928266 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:13:39.928625 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:13:39.943158 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:13:39.944316 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:13:39.946505 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:13:39.948844 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 16:13:40.271732 containerd[1514]: time="2025-01-29T16:13:40.271558001Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:13:40.322223 containerd[1514]: time="2025-01-29T16:13:40.322150706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:13:40.324417 containerd[1514]: time="2025-01-29T16:13:40.324383462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:13:40.324417 containerd[1514]: time="2025-01-29T16:13:40.324408032Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:13:40.324503 containerd[1514]: time="2025-01-29T16:13:40.324422783Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:13:40.324676 containerd[1514]: time="2025-01-29T16:13:40.324653241Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:13:40.324701 containerd[1514]: time="2025-01-29T16:13:40.324672885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:13:40.324770 containerd[1514]: time="2025-01-29T16:13:40.324754271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:13:40.324800 containerd[1514]: time="2025-01-29T16:13:40.324769281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:13:40.325060 containerd[1514]: time="2025-01-29T16:13:40.325035593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:13:40.325060 containerd[1514]: time="2025-01-29T16:13:40.325053799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:13:40.325112 containerd[1514]: time="2025-01-29T16:13:40.325067142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:13:40.325112 containerd[1514]: time="2025-01-29T16:13:40.325075983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:13:40.325206 containerd[1514]: time="2025-01-29T16:13:40.325185609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:13:40.325493 containerd[1514]: time="2025-01-29T16:13:40.325471474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:13:40.325666 containerd[1514]: time="2025-01-29T16:13:40.325645894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:13:40.325666 containerd[1514]: time="2025-01-29T16:13:40.325661447Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:13:40.325826 containerd[1514]: time="2025-01-29T16:13:40.325795180Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 16:13:40.325896 containerd[1514]: time="2025-01-29T16:13:40.325880277Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:13:40.328284 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:13:40.355017 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:13:40.406560 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:13:40.417516 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:13:40.417850 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:13:40.424520 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:13:40.456458 tar[1512]: linux-amd64/LICENSE Jan 29 16:13:40.456458 tar[1512]: linux-amd64/README.md Jan 29 16:13:40.506096 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:13:40.510456 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:13:40.513182 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:13:40.514558 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:13:40.516177 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:13:40.587092 containerd[1514]: time="2025-01-29T16:13:40.586973516Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:13:40.587092 containerd[1514]: time="2025-01-29T16:13:40.587077847Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:13:40.587193 containerd[1514]: time="2025-01-29T16:13:40.587097204Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:13:40.587193 containerd[1514]: time="2025-01-29T16:13:40.587124755Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:13:40.587193 containerd[1514]: time="2025-01-29T16:13:40.587145644Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:13:40.587405 containerd[1514]: time="2025-01-29T16:13:40.587373417Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:13:40.588316 containerd[1514]: time="2025-01-29T16:13:40.588115157Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:13:40.588316 containerd[1514]: time="2025-01-29T16:13:40.588268041Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:13:40.588490 containerd[1514]: time="2025-01-29T16:13:40.588402575Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:13:40.588544 containerd[1514]: time="2025-01-29T16:13:40.588470258Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:13:40.588704 containerd[1514]: time="2025-01-29T16:13:40.588670934Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:13:40.588730 containerd[1514]: time="2025-01-29T16:13:40.588701454Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 29 16:13:40.588730 containerd[1514]: time="2025-01-29T16:13:40.588719866Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:13:40.588840 containerd[1514]: time="2025-01-29T16:13:40.588742522Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:13:40.588840 containerd[1514]: time="2025-01-29T16:13:40.588761346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:13:40.588840 containerd[1514]: time="2025-01-29T16:13:40.588775573Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:13:40.588840 containerd[1514]: time="2025-01-29T16:13:40.588795948Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:13:40.588840 containerd[1514]: time="2025-01-29T16:13:40.588811439Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:13:40.588840 containerd[1514]: time="2025-01-29T16:13:40.588837602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.588952 containerd[1514]: time="2025-01-29T16:13:40.588856919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.588952 containerd[1514]: time="2025-01-29T16:13:40.588870498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.588952 containerd[1514]: time="2025-01-29T16:13:40.588885168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.588952 containerd[1514]: time="2025-01-29T16:13:40.588901728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.588952 containerd[1514]: time="2025-01-29T16:13:40.588916604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.588952 containerd[1514]: time="2025-01-29T16:13:40.588930122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.588952 containerd[1514]: time="2025-01-29T16:13:40.588946426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.589090 containerd[1514]: time="2025-01-29T16:13:40.588959677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.589090 containerd[1514]: time="2025-01-29T16:13:40.588978561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.589090 containerd[1514]: time="2025-01-29T16:13:40.588990013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.589090 containerd[1514]: time="2025-01-29T16:13:40.589007293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.589090 containerd[1514]: time="2025-01-29T16:13:40.589022518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 16:13:40.589090 containerd[1514]: time="2025-01-29T16:13:40.589038904Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:13:40.589090 containerd[1514]: time="2025-01-29T16:13:40.589076632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.589090 containerd[1514]: time="2025-01-29T16:13:40.589091456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.589239 containerd[1514]: time="2025-01-29T16:13:40.589103360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:13:40.589239 containerd[1514]: time="2025-01-29T16:13:40.589169892Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:13:40.589239 containerd[1514]: time="2025-01-29T16:13:40.589190802Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:13:40.589239 containerd[1514]: time="2025-01-29T16:13:40.589201174Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:13:40.589239 containerd[1514]: time="2025-01-29T16:13:40.589223204Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:13:40.589239 containerd[1514]: time="2025-01-29T16:13:40.589235540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:13:40.589380 containerd[1514]: time="2025-01-29T16:13:40.589266647Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:13:40.589380 containerd[1514]: time="2025-01-29T16:13:40.589283506Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:13:40.589380 containerd[1514]: time="2025-01-29T16:13:40.589313657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:13:40.589714 containerd[1514]: time="2025-01-29T16:13:40.589651857Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:13:40.589714 containerd[1514]: time="2025-01-29T16:13:40.589708624Z" level=info msg="Connect containerd service" Jan 29 16:13:40.589886 containerd[1514]: time="2025-01-29T16:13:40.589748942Z" level=info msg="using legacy CRI server" Jan 29 16:13:40.589886 containerd[1514]: time="2025-01-29T16:13:40.589758276Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:13:40.589886 containerd[1514]: time="2025-01-29T16:13:40.589861394Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:13:40.590572 containerd[1514]: time="2025-01-29T16:13:40.590535801Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:13:40.590758 
containerd[1514]: time="2025-01-29T16:13:40.590703108Z" level=info msg="Start subscribing containerd event" Jan 29 16:13:40.590802 containerd[1514]: time="2025-01-29T16:13:40.590770904Z" level=info msg="Start recovering state" Jan 29 16:13:40.590877 containerd[1514]: time="2025-01-29T16:13:40.590846790Z" level=info msg="Start event monitor" Jan 29 16:13:40.590914 containerd[1514]: time="2025-01-29T16:13:40.590879255Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:13:40.590935 containerd[1514]: time="2025-01-29T16:13:40.590886862Z" level=info msg="Start snapshots syncer" Jan 29 16:13:40.590935 containerd[1514]: time="2025-01-29T16:13:40.590928619Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:13:40.590973 containerd[1514]: time="2025-01-29T16:13:40.590937604Z" level=info msg="Start streaming server" Jan 29 16:13:40.591022 containerd[1514]: time="2025-01-29T16:13:40.590938457Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:13:40.591180 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:13:40.592416 containerd[1514]: time="2025-01-29T16:13:40.592388950Z" level=info msg="containerd successfully booted in 0.323199s" Jan 29 16:13:41.225962 systemd-networkd[1441]: eth0: Gained IPv6LL Jan 29 16:13:41.229557 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:13:41.231485 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:13:41.244554 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 16:13:41.247157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:13:41.249489 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:13:41.269335 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 16:13:41.269627 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 16:13:41.271464 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:13:41.275053 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:13:42.325986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:13:42.328280 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:13:42.329975 systemd[1]: Startup finished in 824ms (kernel) + 7.429s (initrd) + 6.218s (userspace) = 14.472s. Jan 29 16:13:42.330124 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:13:43.004008 kubelet[1600]: E0129 16:13:43.003936 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:13:43.008424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:13:43.008645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:13:43.009096 systemd[1]: kubelet.service: Consumed 1.595s CPU time, 246.2M memory peak. Jan 29 16:13:44.394824 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
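The kubelet exit above (run.go:74, /var/lib/kubelet/config.yaml missing) is the expected state of a node that has not yet been joined to a cluster: the unit is started at boot, but the config file it needs is normally written later by kubeadm init or kubeadm join. As a rough sketch only, with illustrative values not read from this host, the generated KubeletConfiguration is along these lines:

    # /var/lib/kubelet/config.yaml, normally written by kubeadm; values illustrative
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt

Those particular values are at least consistent with what the kubelet reports once it finally comes up later in this log: CgroupDriver "systemd" (matching SystemdCgroup:true in containerd's runc options), the static pod path /etc/kubernetes/manifests, and the client CA bundle at /etc/kubernetes/pki/ca.crt.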
Jan 29 16:13:44.396136 systemd[1]: Started sshd@0-10.0.0.32:22-10.0.0.1:41248.service - OpenSSH per-connection server daemon (10.0.0.1:41248). Jan 29 16:13:44.455535 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 41248 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:13:44.457405 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:13:44.468781 systemd-logind[1503]: New session 1 of user core. Jan 29 16:13:44.470173 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:13:44.481524 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:13:44.494550 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:13:44.506574 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:13:44.509637 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:13:44.512490 systemd-logind[1503]: New session c1 of user core. Jan 29 16:13:44.665644 systemd[1618]: Queued start job for default target default.target. Jan 29 16:13:44.674664 systemd[1618]: Created slice app.slice - User Application Slice. Jan 29 16:13:44.674692 systemd[1618]: Reached target paths.target - Paths. Jan 29 16:13:44.674737 systemd[1618]: Reached target timers.target - Timers. Jan 29 16:13:44.676389 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:13:44.690583 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:13:44.690714 systemd[1618]: Reached target sockets.target - Sockets. Jan 29 16:13:44.690760 systemd[1618]: Reached target basic.target - Basic System. Jan 29 16:13:44.690806 systemd[1618]: Reached target default.target - Main User Target. Jan 29 16:13:44.690840 systemd[1618]: Startup finished in 170ms. Jan 29 16:13:44.691731 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:13:44.694163 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:13:44.768023 systemd[1]: Started sshd@1-10.0.0.32:22-10.0.0.1:41258.service - OpenSSH per-connection server daemon (10.0.0.1:41258). Jan 29 16:13:44.805732 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 41258 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:13:44.807494 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:13:44.812116 systemd-logind[1503]: New session 2 of user core. Jan 29 16:13:44.821556 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:13:44.876654 sshd[1631]: Connection closed by 10.0.0.1 port 41258 Jan 29 16:13:44.877059 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Jan 29 16:13:44.895437 systemd[1]: sshd@1-10.0.0.32:22-10.0.0.1:41258.service: Deactivated successfully. Jan 29 16:13:44.897674 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:13:44.899599 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:13:44.900958 systemd[1]: Started sshd@2-10.0.0.32:22-10.0.0.1:41264.service - OpenSSH per-connection server daemon (10.0.0.1:41264). Jan 29 16:13:44.901875 systemd-logind[1503]: Removed session 2. 
Jan 29 16:13:44.943879 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 41264 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:13:44.945408 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:13:44.949926 systemd-logind[1503]: New session 3 of user core. Jan 29 16:13:44.960430 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:13:45.011860 sshd[1639]: Connection closed by 10.0.0.1 port 41264 Jan 29 16:13:45.012374 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jan 29 16:13:45.029549 systemd[1]: sshd@2-10.0.0.32:22-10.0.0.1:41264.service: Deactivated successfully. Jan 29 16:13:45.031703 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:13:45.033229 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:13:45.041531 systemd[1]: Started sshd@3-10.0.0.32:22-10.0.0.1:41268.service - OpenSSH per-connection server daemon (10.0.0.1:41268). Jan 29 16:13:45.042434 systemd-logind[1503]: Removed session 3. Jan 29 16:13:45.078716 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 41268 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:13:45.080237 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:13:45.084568 systemd-logind[1503]: New session 4 of user core. Jan 29 16:13:45.098419 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:13:45.153156 sshd[1647]: Connection closed by 10.0.0.1 port 41268 Jan 29 16:13:45.153573 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Jan 29 16:13:45.171080 systemd[1]: sshd@3-10.0.0.32:22-10.0.0.1:41268.service: Deactivated successfully. Jan 29 16:13:45.173181 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:13:45.174681 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:13:45.183570 systemd[1]: Started sshd@4-10.0.0.32:22-10.0.0.1:41270.service - OpenSSH per-connection server daemon (10.0.0.1:41270). Jan 29 16:13:45.184379 systemd-logind[1503]: Removed session 4. Jan 29 16:13:45.221121 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 41270 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:13:45.222403 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:13:45.227200 systemd-logind[1503]: New session 5 of user core. Jan 29 16:13:45.236415 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:13:45.299309 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:13:45.299789 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:13:45.326785 sudo[1656]: pam_unix(sudo:session): session closed for user root Jan 29 16:13:45.328659 sshd[1655]: Connection closed by 10.0.0.1 port 41270 Jan 29 16:13:45.329098 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Jan 29 16:13:45.346257 systemd[1]: sshd@4-10.0.0.32:22-10.0.0.1:41270.service: Deactivated successfully. Jan 29 16:13:45.348501 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:13:45.350415 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:13:45.358691 systemd[1]: Started sshd@5-10.0.0.32:22-10.0.0.1:41274.service - OpenSSH per-connection server daemon (10.0.0.1:41274). 
Jan 29 16:13:45.359771 systemd-logind[1503]: Removed session 5. Jan 29 16:13:45.398642 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 41274 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:13:45.400868 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:13:45.405782 systemd-logind[1503]: New session 6 of user core. Jan 29 16:13:45.419433 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:13:45.475319 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:13:45.475670 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:13:45.480572 sudo[1666]: pam_unix(sudo:session): session closed for user root Jan 29 16:13:45.488112 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:13:45.488481 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:13:45.512622 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:13:45.545853 augenrules[1688]: No rules Jan 29 16:13:45.547870 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:13:45.548170 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:13:45.549397 sudo[1665]: pam_unix(sudo:session): session closed for user root Jan 29 16:13:45.550965 sshd[1664]: Connection closed by 10.0.0.1 port 41274 Jan 29 16:13:45.551309 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Jan 29 16:13:45.567433 systemd[1]: sshd@5-10.0.0.32:22-10.0.0.1:41274.service: Deactivated successfully. Jan 29 16:13:45.569771 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:13:45.571518 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:13:45.572937 systemd[1]: Started sshd@6-10.0.0.32:22-10.0.0.1:41284.service - OpenSSH per-connection server daemon (10.0.0.1:41284). Jan 29 16:13:45.573668 systemd-logind[1503]: Removed session 6. Jan 29 16:13:45.634866 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 41284 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:13:45.636392 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:13:45.640903 systemd-logind[1503]: New session 7 of user core. Jan 29 16:13:45.651434 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:13:45.706931 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:13:45.707280 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:13:46.394537 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:13:46.394709 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:13:47.037126 dockerd[1720]: time="2025-01-29T16:13:47.037043783Z" level=info msg="Starting up" Jan 29 16:13:48.449742 dockerd[1720]: time="2025-01-29T16:13:48.449655222Z" level=info msg="Loading containers: start." 
Jan 29 16:13:48.947329 kernel: Initializing XFRM netlink socket Jan 29 16:13:49.032647 systemd-networkd[1441]: docker0: Link UP Jan 29 16:13:49.122807 dockerd[1720]: time="2025-01-29T16:13:49.122754364Z" level=info msg="Loading containers: done." Jan 29 16:13:49.139675 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1644653200-merged.mount: Deactivated successfully. Jan 29 16:13:49.143976 dockerd[1720]: time="2025-01-29T16:13:49.143936256Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:13:49.144051 dockerd[1720]: time="2025-01-29T16:13:49.144038935Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:13:49.144173 dockerd[1720]: time="2025-01-29T16:13:49.144153318Z" level=info msg="Daemon has completed initialization" Jan 29 16:13:49.185999 dockerd[1720]: time="2025-01-29T16:13:49.185917502Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:13:49.186179 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:13:50.411640 containerd[1514]: time="2025-01-29T16:13:50.411565126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 16:13:51.483376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201660824.mount: Deactivated successfully. Jan 29 16:13:53.259026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:13:53.272499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:13:53.486343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:13:53.507648 (kubelet)[1985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:13:53.587337 kubelet[1985]: E0129 16:13:53.587007 1985 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:13:53.593708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:13:53.593910 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:13:53.594390 systemd[1]: kubelet.service: Consumed 261ms CPU time, 99.1M memory peak. 
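The failure here is the same missing-config error as before; the roughly ten-second gap between each exit and the following "Scheduled restart job" line is consistent with the kubelet unit carrying a restart policy along the lines of the sketch below (the actual unit file is not shown in this log):

    [Service]
    ; sketch; the real kubelet.service restart settings are assumed, not quoted
    Restart=always
    RestartSec=10

so the service keeps retrying until kubeadm drops the config file into place.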
Jan 29 16:13:54.080647 containerd[1514]: time="2025-01-29T16:13:54.080488637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:54.081612 containerd[1514]: time="2025-01-29T16:13:54.081544386Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 16:13:54.083031 containerd[1514]: time="2025-01-29T16:13:54.082980679Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:54.085939 containerd[1514]: time="2025-01-29T16:13:54.085913490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:54.087138 containerd[1514]: time="2025-01-29T16:13:54.087078320Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.675452717s" Jan 29 16:13:54.087138 containerd[1514]: time="2025-01-29T16:13:54.087134229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 16:13:54.114604 containerd[1514]: time="2025-01-29T16:13:54.114547866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 16:13:56.394427 containerd[1514]: time="2025-01-29T16:13:56.394365624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:56.395225 containerd[1514]: time="2025-01-29T16:13:56.395180809Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 16:13:56.396773 containerd[1514]: time="2025-01-29T16:13:56.396741324Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:56.399540 containerd[1514]: time="2025-01-29T16:13:56.399509957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:56.401818 containerd[1514]: time="2025-01-29T16:13:56.400835512Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.286243041s" Jan 29 16:13:56.401818 containerd[1514]: time="2025-01-29T16:13:56.400875821Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 16:13:56.425774 
containerd[1514]: time="2025-01-29T16:13:56.425704415Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 16:13:58.124450 containerd[1514]: time="2025-01-29T16:13:58.124380379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:58.125582 containerd[1514]: time="2025-01-29T16:13:58.125093640Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 16:13:58.126104 containerd[1514]: time="2025-01-29T16:13:58.126083167Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:58.129522 containerd[1514]: time="2025-01-29T16:13:58.129491627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:13:58.130403 containerd[1514]: time="2025-01-29T16:13:58.130379556Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.704614154s" Jan 29 16:13:58.130403 containerd[1514]: time="2025-01-29T16:13:58.130402995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 16:13:58.152887 containerd[1514]: time="2025-01-29T16:13:58.152810889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 16:13:59.805963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560896739.mount: Deactivated successfully. 
Jan 29 16:14:00.495737 containerd[1514]: time="2025-01-29T16:14:00.495645992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:00.516497 containerd[1514]: time="2025-01-29T16:14:00.516402960Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 16:14:00.524940 containerd[1514]: time="2025-01-29T16:14:00.524879354Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:00.539082 containerd[1514]: time="2025-01-29T16:14:00.539022571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:00.539929 containerd[1514]: time="2025-01-29T16:14:00.539863269Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.387004756s" Jan 29 16:14:00.539929 containerd[1514]: time="2025-01-29T16:14:00.539907692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 16:14:00.564889 containerd[1514]: time="2025-01-29T16:14:00.564833893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:14:01.764751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714858237.mount: Deactivated successfully. 
Jan 29 16:14:03.409845 containerd[1514]: time="2025-01-29T16:14:03.409785535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:03.410632 containerd[1514]: time="2025-01-29T16:14:03.410590765Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 16:14:03.411952 containerd[1514]: time="2025-01-29T16:14:03.411921143Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:03.414618 containerd[1514]: time="2025-01-29T16:14:03.414585319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:03.415778 containerd[1514]: time="2025-01-29T16:14:03.415747740Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.85086974s" Jan 29 16:14:03.415778 containerd[1514]: time="2025-01-29T16:14:03.415783871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:14:03.442024 containerd[1514]: time="2025-01-29T16:14:03.441771280Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 16:14:03.844476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:14:03.853475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:14:04.005665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:14:04.011545 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:14:04.108046 kubelet[2096]: E0129 16:14:04.107885 2096 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:14:04.111524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:14:04.111748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:14:04.112175 systemd[1]: kubelet.service: Consumed 247ms CPU time, 98.3M memory peak. Jan 29 16:14:04.188536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238234628.mount: Deactivated successfully. 
Jan 29 16:14:04.193685 containerd[1514]: time="2025-01-29T16:14:04.193615334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:04.194345 containerd[1514]: time="2025-01-29T16:14:04.194259101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 16:14:04.195481 containerd[1514]: time="2025-01-29T16:14:04.195429730Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:04.197689 containerd[1514]: time="2025-01-29T16:14:04.197639234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:04.198366 containerd[1514]: time="2025-01-29T16:14:04.198327109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 756.519007ms" Jan 29 16:14:04.198366 containerd[1514]: time="2025-01-29T16:14:04.198358631Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 16:14:04.230804 containerd[1514]: time="2025-01-29T16:14:04.230750184Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 16:14:04.904071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712175137.mount: Deactivated successfully. Jan 29 16:14:07.738370 containerd[1514]: time="2025-01-29T16:14:07.738302520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:07.739181 containerd[1514]: time="2025-01-29T16:14:07.739020083Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 16:14:07.740340 containerd[1514]: time="2025-01-29T16:14:07.740307557Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:07.743058 containerd[1514]: time="2025-01-29T16:14:07.743021134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:07.744840 containerd[1514]: time="2025-01-29T16:14:07.744790332Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.513994006s" Jan 29 16:14:07.744840 containerd[1514]: time="2025-01-29T16:14:07.744835268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 16:14:10.419321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:14:10.419503 systemd[1]: kubelet.service: Consumed 247ms CPU time, 98.3M memory peak. Jan 29 16:14:10.428493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:14:10.450414 systemd[1]: Reload requested from client PID 2243 ('systemctl') (unit session-7.scope)... Jan 29 16:14:10.450433 systemd[1]: Reloading... Jan 29 16:14:10.565086 zram_generator::config[2290]: No configuration found. Jan 29 16:14:10.884021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:14:10.998264 systemd[1]: Reloading finished in 547 ms. Jan 29 16:14:11.051199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:14:11.055263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:14:11.055763 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:14:11.056028 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:14:11.056065 systemd[1]: kubelet.service: Consumed 156ms CPU time, 83.6M memory peak. Jan 29 16:14:11.057924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:14:11.211585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:14:11.216717 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:14:11.258563 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:14:11.258563 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:14:11.258563 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
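The three deprecation warnings refer to flags still being passed on the kubelet command line. For a v1.30 kubelet, the first and third have direct KubeletConfiguration equivalents and could live in the same /var/lib/kubelet/config.yaml sketched earlier; a hedged example using paths that appear elsewhere in this log (the unix:// scheme is assumed):

    # fields valid in kubelet.config.k8s.io/v1beta1; values illustrative
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file counterpart; as the warning itself says, it is slated for removal once the image garbage collector obtains the sandbox image from CRI.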
Jan 29 16:14:11.258988 kubelet[2337]: I0129 16:14:11.258613 2337 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:14:11.591003 kubelet[2337]: I0129 16:14:11.590798 2337 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:14:11.591003 kubelet[2337]: I0129 16:14:11.590836 2337 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:14:11.591177 kubelet[2337]: I0129 16:14:11.591098 2337 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:14:11.610606 kubelet[2337]: I0129 16:14:11.610531 2337 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:14:11.611491 kubelet[2337]: E0129 16:14:11.611467 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.695462 kubelet[2337]: I0129 16:14:11.695419 2337 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:14:11.697877 kubelet[2337]: I0129 16:14:11.697829 2337 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:14:11.698058 kubelet[2337]: I0129 16:14:11.697870 2337 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:14:11.698569 kubelet[2337]: I0129 16:14:11.698546 2337 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:14:11.698569 kubelet[2337]: I0129 16:14:11.698565 2337 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:14:11.698729 kubelet[2337]: I0129 16:14:11.698708 2337 state_mem.go:36] "Initialized new in-memory state store" Jan 29 
16:14:11.699557 kubelet[2337]: I0129 16:14:11.699536 2337 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:14:11.699557 kubelet[2337]: I0129 16:14:11.699554 2337 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:14:11.699621 kubelet[2337]: I0129 16:14:11.699575 2337 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:14:11.699621 kubelet[2337]: I0129 16:14:11.699602 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:14:11.702884 kubelet[2337]: W0129 16:14:11.702822 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.702946 kubelet[2337]: E0129 16:14:11.702899 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.704465 kubelet[2337]: W0129 16:14:11.704142 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.704465 kubelet[2337]: E0129 16:14:11.704192 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.704871 kubelet[2337]: I0129 16:14:11.704849 2337 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:14:11.706651 kubelet[2337]: I0129 16:14:11.706606 2337 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:14:11.706711 kubelet[2337]: W0129 16:14:11.706688 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 16:14:11.707570 kubelet[2337]: I0129 16:14:11.707551 2337 server.go:1264] "Started kubelet" Jan 29 16:14:11.707782 kubelet[2337]: I0129 16:14:11.707728 2337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:14:11.707881 kubelet[2337]: I0129 16:14:11.707829 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:14:11.708258 kubelet[2337]: I0129 16:14:11.708233 2337 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:14:11.709827 kubelet[2337]: I0129 16:14:11.709801 2337 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:14:11.711854 kubelet[2337]: I0129 16:14:11.711793 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:14:11.712464 kubelet[2337]: I0129 16:14:11.712434 2337 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:14:11.714856 kubelet[2337]: I0129 16:14:11.714031 2337 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:14:11.714856 kubelet[2337]: I0129 16:14:11.714082 2337 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:14:11.714856 kubelet[2337]: E0129 16:14:11.714109 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:14:11.714856 kubelet[2337]: W0129 16:14:11.714385 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.714856 kubelet[2337]: E0129 16:14:11.714431 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.714856 kubelet[2337]: E0129 16:14:11.714585 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="200ms" Jan 29 16:14:11.714856 kubelet[2337]: E0129 16:14:11.714802 2337 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:14:11.715092 kubelet[2337]: I0129 16:14:11.714936 2337 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:14:11.715092 kubelet[2337]: I0129 16:14:11.715014 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:14:11.715140 kubelet[2337]: E0129 16:14:11.715031 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f35e7fe3e6f89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:14:11.707522953 +0000 UTC m=+0.486756088,LastTimestamp:2025-01-29 16:14:11.707522953 +0000 UTC m=+0.486756088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:14:11.716252 kubelet[2337]: I0129 16:14:11.716231 2337 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:14:11.730539 kubelet[2337]: I0129 16:14:11.730386 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:14:11.731802 kubelet[2337]: I0129 16:14:11.731778 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:14:11.731843 kubelet[2337]: I0129 16:14:11.731822 2337 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:14:11.732130 kubelet[2337]: I0129 16:14:11.732000 2337 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:14:11.732130 kubelet[2337]: E0129 16:14:11.732042 2337 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:14:11.732676 kubelet[2337]: W0129 16:14:11.732653 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.732757 kubelet[2337]: E0129 16:14:11.732745 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:11.736316 kubelet[2337]: I0129 16:14:11.736292 2337 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:14:11.736316 kubelet[2337]: I0129 16:14:11.736313 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:14:11.736406 kubelet[2337]: I0129 16:14:11.736335 2337 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:14:11.816173 kubelet[2337]: I0129 16:14:11.816121 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:14:11.816556 kubelet[2337]: E0129 16:14:11.816527 2337 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Jan 29 16:14:11.832699 kubelet[2337]: E0129 16:14:11.832678 2337 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:14:11.915670 kubelet[2337]: E0129 16:14:11.915525 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="400ms" Jan 29 16:14:12.018480 kubelet[2337]: I0129 16:14:12.018427 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:14:12.018845 kubelet[2337]: E0129 16:14:12.018813 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Jan 29 16:14:12.033014 kubelet[2337]: E0129 16:14:12.032949 2337 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:14:12.316413 kubelet[2337]: E0129 16:14:12.316261 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="800ms" Jan 29 16:14:12.421030 kubelet[2337]: I0129 16:14:12.421003 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:14:12.421345 kubelet[2337]: E0129 16:14:12.421316 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Jan 29 16:14:12.433452 kubelet[2337]: E0129 16:14:12.433415 2337 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:14:12.559437 kubelet[2337]: W0129 16:14:12.559391 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:12.559437 kubelet[2337]: E0129 16:14:12.559430 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:12.712075 kubelet[2337]: I0129 16:14:12.711927 2337 policy_none.go:49] "None policy: Start" Jan 29 16:14:12.712964 kubelet[2337]: I0129 16:14:12.712933 2337 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:14:12.713042 kubelet[2337]: I0129 16:14:12.712971 2337 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:14:12.767088 kubelet[2337]: W0129 16:14:12.766980 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:12.767190 kubelet[2337]: E0129 16:14:12.767107 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:12.767520 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:14:12.782226 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:14:12.785252 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:14:12.802667 kubelet[2337]: I0129 16:14:12.802625 2337 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:14:12.803304 kubelet[2337]: I0129 16:14:12.803022 2337 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:14:12.803304 kubelet[2337]: I0129 16:14:12.803255 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:14:12.804648 kubelet[2337]: E0129 16:14:12.804620 2337 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 16:14:12.810286 kubelet[2337]: W0129 16:14:12.810246 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:12.810348 kubelet[2337]: E0129 16:14:12.810301 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:12.953876 kubelet[2337]: W0129 16:14:12.953806 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:12.953876 kubelet[2337]: E0129 16:14:12.953860 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:13.117319 kubelet[2337]: E0129 16:14:13.117147 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="1.6s" Jan 29 16:14:13.223358 kubelet[2337]: I0129 16:14:13.223316 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:14:13.223673 kubelet[2337]: E0129 16:14:13.223636 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Jan 29 16:14:13.233941 kubelet[2337]: I0129 16:14:13.233821 2337 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 16:14:13.235322 kubelet[2337]: I0129 16:14:13.235261 2337 topology_manager.go:215] "Topology Admit Handler" 
podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 16:14:13.236786 kubelet[2337]: I0129 16:14:13.236744 2337 topology_manager.go:215] "Topology Admit Handler" podUID="6e74863598c7abdf691100b5f7f3eac4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 16:14:13.247990 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 29 16:14:13.265464 systemd[1]: Created slice kubepods-burstable-pod6e74863598c7abdf691100b5f7f3eac4.slice - libcontainer container kubepods-burstable-pod6e74863598c7abdf691100b5f7f3eac4.slice. Jan 29 16:14:13.280207 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 29 16:14:13.322997 kubelet[2337]: I0129 16:14:13.322921 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e74863598c7abdf691100b5f7f3eac4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e74863598c7abdf691100b5f7f3eac4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:14:13.322997 kubelet[2337]: I0129 16:14:13.322976 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:13.322997 kubelet[2337]: I0129 16:14:13.322994 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:13.322997 kubelet[2337]: I0129 16:14:13.323011 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e74863598c7abdf691100b5f7f3eac4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e74863598c7abdf691100b5f7f3eac4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:14:13.323594 kubelet[2337]: I0129 16:14:13.323026 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e74863598c7abdf691100b5f7f3eac4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e74863598c7abdf691100b5f7f3eac4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:14:13.323594 kubelet[2337]: I0129 16:14:13.323039 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:13.323594 kubelet[2337]: I0129 16:14:13.323053 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") 
pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:13.323594 kubelet[2337]: I0129 16:14:13.323068 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:13.323594 kubelet[2337]: I0129 16:14:13.323082 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:14:13.564354 kubelet[2337]: E0129 16:14:13.564174 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:13.565089 containerd[1514]: time="2025-01-29T16:14:13.565027337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 16:14:13.577437 kubelet[2337]: E0129 16:14:13.577390 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:13.577845 containerd[1514]: time="2025-01-29T16:14:13.577805209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e74863598c7abdf691100b5f7f3eac4,Namespace:kube-system,Attempt:0,}" Jan 29 16:14:13.583084 kubelet[2337]: E0129 16:14:13.583062 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:13.583863 containerd[1514]: time="2025-01-29T16:14:13.583805423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 16:14:13.632440 kubelet[2337]: E0129 16:14:13.632389 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:14.031069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339036109.mount: Deactivated successfully. 
Jan 29 16:14:14.037798 containerd[1514]: time="2025-01-29T16:14:14.037715858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:14:14.042376 containerd[1514]: time="2025-01-29T16:14:14.042307609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:14:14.043462 containerd[1514]: time="2025-01-29T16:14:14.043401083Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:14:14.044649 containerd[1514]: time="2025-01-29T16:14:14.044607855Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:14:14.045705 containerd[1514]: time="2025-01-29T16:14:14.045596610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:14:14.046912 containerd[1514]: time="2025-01-29T16:14:14.046869191Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:14:14.048453 containerd[1514]: time="2025-01-29T16:14:14.048389147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:14:14.049985 containerd[1514]: time="2025-01-29T16:14:14.049946399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:14:14.052949 containerd[1514]: time="2025-01-29T16:14:14.052916893Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.05469ms" Jan 29 16:14:14.053652 containerd[1514]: time="2025-01-29T16:14:14.053610194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.433834ms" Jan 29 16:14:14.057210 containerd[1514]: time="2025-01-29T16:14:14.057148473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.275448ms" Jan 29 16:14:14.249454 containerd[1514]: time="2025-01-29T16:14:14.249311306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:14:14.249454 containerd[1514]: time="2025-01-29T16:14:14.249419663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:14:14.250175 containerd[1514]: time="2025-01-29T16:14:14.249443116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:14.250175 containerd[1514]: time="2025-01-29T16:14:14.249591656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:14.252381 containerd[1514]: time="2025-01-29T16:14:14.252091376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:14:14.252381 containerd[1514]: time="2025-01-29T16:14:14.252136388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:14:14.252381 containerd[1514]: time="2025-01-29T16:14:14.252153096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:14.252381 containerd[1514]: time="2025-01-29T16:14:14.252230292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:14.259303 containerd[1514]: time="2025-01-29T16:14:14.257569829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:14:14.259303 containerd[1514]: time="2025-01-29T16:14:14.257619914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:14:14.259303 containerd[1514]: time="2025-01-29T16:14:14.257634397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:14.259303 containerd[1514]: time="2025-01-29T16:14:14.257722216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:14.341621 systemd[1]: Started cri-containerd-27876e6504fd18b73feca3e366b7ba55af56842329c5f4a2ce975de36bf1a2ce.scope - libcontainer container 27876e6504fd18b73feca3e366b7ba55af56842329c5f4a2ce975de36bf1a2ce. Jan 29 16:14:14.362413 systemd[1]: Started cri-containerd-1112d338965075ca19d9f599b77b7671b938efe026518fca8c35cdc0bf06cdf8.scope - libcontainer container 1112d338965075ca19d9f599b77b7671b938efe026518fca8c35cdc0bf06cdf8. Jan 29 16:14:14.364607 systemd[1]: Started cri-containerd-33d8fb301adcba0e949a3ffffc33a1c07df1fdeb1a4be654766d49946d197117.scope - libcontainer container 33d8fb301adcba0e949a3ffffc33a1c07df1fdeb1a4be654766d49946d197117. 
Jan 29 16:14:14.399567 containerd[1514]: time="2025-01-29T16:14:14.399494278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e74863598c7abdf691100b5f7f3eac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"27876e6504fd18b73feca3e366b7ba55af56842329c5f4a2ce975de36bf1a2ce\"" Jan 29 16:14:14.401112 kubelet[2337]: E0129 16:14:14.401085 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:14.404166 containerd[1514]: time="2025-01-29T16:14:14.404132414Z" level=info msg="CreateContainer within sandbox \"27876e6504fd18b73feca3e366b7ba55af56842329c5f4a2ce975de36bf1a2ce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:14:14.513610 containerd[1514]: time="2025-01-29T16:14:14.513566471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1112d338965075ca19d9f599b77b7671b938efe026518fca8c35cdc0bf06cdf8\"" Jan 29 16:14:14.514533 kubelet[2337]: E0129 16:14:14.514497 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:14.516300 containerd[1514]: time="2025-01-29T16:14:14.516251384Z" level=info msg="CreateContainer within sandbox \"1112d338965075ca19d9f599b77b7671b938efe026518fca8c35cdc0bf06cdf8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:14:14.529083 containerd[1514]: time="2025-01-29T16:14:14.529036861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"33d8fb301adcba0e949a3ffffc33a1c07df1fdeb1a4be654766d49946d197117\"" Jan 29 16:14:14.529932 kubelet[2337]: E0129 16:14:14.529901 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:14.531877 containerd[1514]: time="2025-01-29T16:14:14.531850427Z" level=info msg="CreateContainer within sandbox \"33d8fb301adcba0e949a3ffffc33a1c07df1fdeb1a4be654766d49946d197117\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:14:14.576418 kubelet[2337]: W0129 16:14:14.576369 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:14.576506 kubelet[2337]: E0129 16:14:14.576427 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Jan 29 16:14:14.588382 containerd[1514]: time="2025-01-29T16:14:14.588336318Z" level=info msg="CreateContainer within sandbox \"27876e6504fd18b73feca3e366b7ba55af56842329c5f4a2ce975de36bf1a2ce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23b65d4db17a0ef193f760695500f3ec5bc9d12f9248573a4d8b531398191e73\"" Jan 29 16:14:14.588967 containerd[1514]: time="2025-01-29T16:14:14.588937589Z" level=info msg="StartContainer for 
\"23b65d4db17a0ef193f760695500f3ec5bc9d12f9248573a4d8b531398191e73\"" Jan 29 16:14:14.604652 containerd[1514]: time="2025-01-29T16:14:14.604533355Z" level=info msg="CreateContainer within sandbox \"33d8fb301adcba0e949a3ffffc33a1c07df1fdeb1a4be654766d49946d197117\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"363a8b5af5f094942a730cbdeaa7bd6a57dc4b3ae8db55b324721ca6de94b2cb\"" Jan 29 16:14:14.605196 containerd[1514]: time="2025-01-29T16:14:14.605171439Z" level=info msg="StartContainer for \"363a8b5af5f094942a730cbdeaa7bd6a57dc4b3ae8db55b324721ca6de94b2cb\"" Jan 29 16:14:14.606094 containerd[1514]: time="2025-01-29T16:14:14.606062812Z" level=info msg="CreateContainer within sandbox \"1112d338965075ca19d9f599b77b7671b938efe026518fca8c35cdc0bf06cdf8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ce539c160d8abd3932aee94b4d5a4856747078877e0cf2099603f04e0e3c848\"" Jan 29 16:14:14.606599 containerd[1514]: time="2025-01-29T16:14:14.606563594Z" level=info msg="StartContainer for \"7ce539c160d8abd3932aee94b4d5a4856747078877e0cf2099603f04e0e3c848\"" Jan 29 16:14:14.621582 systemd[1]: Started cri-containerd-23b65d4db17a0ef193f760695500f3ec5bc9d12f9248573a4d8b531398191e73.scope - libcontainer container 23b65d4db17a0ef193f760695500f3ec5bc9d12f9248573a4d8b531398191e73. Jan 29 16:14:14.637497 systemd[1]: Started cri-containerd-363a8b5af5f094942a730cbdeaa7bd6a57dc4b3ae8db55b324721ca6de94b2cb.scope - libcontainer container 363a8b5af5f094942a730cbdeaa7bd6a57dc4b3ae8db55b324721ca6de94b2cb. Jan 29 16:14:14.648488 systemd[1]: Started cri-containerd-7ce539c160d8abd3932aee94b4d5a4856747078877e0cf2099603f04e0e3c848.scope - libcontainer container 7ce539c160d8abd3932aee94b4d5a4856747078877e0cf2099603f04e0e3c848. 
Jan 29 16:14:14.683056 containerd[1514]: time="2025-01-29T16:14:14.683010807Z" level=info msg="StartContainer for \"23b65d4db17a0ef193f760695500f3ec5bc9d12f9248573a4d8b531398191e73\" returns successfully" Jan 29 16:14:14.707079 containerd[1514]: time="2025-01-29T16:14:14.706313928Z" level=info msg="StartContainer for \"363a8b5af5f094942a730cbdeaa7bd6a57dc4b3ae8db55b324721ca6de94b2cb\" returns successfully" Jan 29 16:14:14.709599 containerd[1514]: time="2025-01-29T16:14:14.709554919Z" level=info msg="StartContainer for \"7ce539c160d8abd3932aee94b4d5a4856747078877e0cf2099603f04e0e3c848\" returns successfully" Jan 29 16:14:14.759973 kubelet[2337]: E0129 16:14:14.759920 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="3.2s" Jan 29 16:14:14.766603 kubelet[2337]: E0129 16:14:14.766572 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:14.770874 kubelet[2337]: E0129 16:14:14.770737 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:14.771861 kubelet[2337]: E0129 16:14:14.771806 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:14.825521 kubelet[2337]: I0129 16:14:14.825343 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:14:15.774092 kubelet[2337]: E0129 16:14:15.774027 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:16.455295 kubelet[2337]: I0129 16:14:16.455234 2337 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 16:14:16.703865 kubelet[2337]: I0129 16:14:16.703813 2337 apiserver.go:52] "Watching apiserver" Jan 29 16:14:16.715096 kubelet[2337]: I0129 16:14:16.714980 2337 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:14:17.656521 kubelet[2337]: E0129 16:14:17.656470 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:17.776563 kubelet[2337]: E0129 16:14:17.776508 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:18.153949 systemd[1]: Reload requested from client PID 2616 ('systemctl') (unit session-7.scope)... Jan 29 16:14:18.153967 systemd[1]: Reloading... Jan 29 16:14:18.266321 zram_generator::config[2669]: No configuration found. Jan 29 16:14:18.374803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:14:18.495755 systemd[1]: Reloading finished in 341 ms. 
Jan 29 16:14:18.525162 kubelet[2337]: I0129 16:14:18.525080 2337 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:14:18.525228 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:14:18.544724 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:14:18.545052 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:14:18.545126 systemd[1]: kubelet.service: Consumed 1.063s CPU time, 118.5M memory peak. Jan 29 16:14:18.554687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:14:18.726724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:14:18.732887 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:14:18.774056 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:14:18.774056 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:14:18.774056 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:14:18.774056 kubelet[2705]: I0129 16:14:18.774014 2705 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:14:18.781238 kubelet[2705]: I0129 16:14:18.781198 2705 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:14:18.781238 kubelet[2705]: I0129 16:14:18.781226 2705 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:14:18.781418 kubelet[2705]: I0129 16:14:18.781395 2705 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:14:18.782591 kubelet[2705]: I0129 16:14:18.782569 2705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:14:18.783741 kubelet[2705]: I0129 16:14:18.783714 2705 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:14:18.791376 kubelet[2705]: I0129 16:14:18.791340 2705 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:14:18.793828 kubelet[2705]: I0129 16:14:18.793785 2705 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:14:18.794542 kubelet[2705]: I0129 16:14:18.793830 2705 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:14:18.794542 kubelet[2705]: I0129 16:14:18.794017 2705 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:14:18.794542 kubelet[2705]: I0129 16:14:18.794027 2705 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:14:18.794542 kubelet[2705]: I0129 16:14:18.794072 2705 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:14:18.794542 kubelet[2705]: I0129 16:14:18.794168 2705 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:14:18.794917 kubelet[2705]: I0129 16:14:18.794186 2705 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:14:18.794917 kubelet[2705]: I0129 16:14:18.794207 2705 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:14:18.794917 kubelet[2705]: I0129 16:14:18.794222 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:14:18.795712 kubelet[2705]: I0129 16:14:18.795676 2705 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:14:18.795879 kubelet[2705]: I0129 16:14:18.795852 2705 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:14:18.796242 kubelet[2705]: I0129 16:14:18.796214 2705 server.go:1264] "Started kubelet" Jan 29 16:14:18.799673 kubelet[2705]: I0129 16:14:18.799649 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:14:18.807919 kubelet[2705]: I0129 16:14:18.807857 2705 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:14:18.808059 kubelet[2705]: I0129 16:14:18.807983 2705 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:14:18.808407 kubelet[2705]: I0129 16:14:18.808377 2705 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:14:18.808864 kubelet[2705]: I0129 16:14:18.808844 2705 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:14:18.809850 kubelet[2705]: I0129 16:14:18.809832 2705 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:14:18.810728 kubelet[2705]: I0129 16:14:18.810713 2705 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:14:18.813164 kubelet[2705]: I0129 16:14:18.811473 2705 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:14:18.813164 kubelet[2705]: I0129 16:14:18.811580 2705 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:14:18.813164 kubelet[2705]: I0129 16:14:18.812921 2705 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:14:18.813580 kubelet[2705]: E0129 16:14:18.813563 2705 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:14:18.814571 kubelet[2705]: I0129 16:14:18.814552 2705 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:14:18.821137 kubelet[2705]: I0129 16:14:18.821056 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:14:18.822451 kubelet[2705]: I0129 16:14:18.822430 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:14:18.822505 kubelet[2705]: I0129 16:14:18.822467 2705 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:14:18.822505 kubelet[2705]: I0129 16:14:18.822486 2705 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:14:18.822557 kubelet[2705]: E0129 16:14:18.822531 2705 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:14:18.858346 kubelet[2705]: I0129 16:14:18.858297 2705 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:14:18.858346 kubelet[2705]: I0129 16:14:18.858316 2705 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:14:18.858346 kubelet[2705]: I0129 16:14:18.858337 2705 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:14:18.858553 kubelet[2705]: I0129 16:14:18.858504 2705 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:14:18.858553 kubelet[2705]: I0129 16:14:18.858516 2705 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:14:18.858553 kubelet[2705]: I0129 16:14:18.858533 2705 policy_none.go:49] "None policy: Start" Jan 29 16:14:18.859399 kubelet[2705]: I0129 16:14:18.859342 2705 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:14:18.859399 kubelet[2705]: I0129 16:14:18.859385 2705 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:14:18.859607 kubelet[2705]: I0129 16:14:18.859588 2705 state_mem.go:75] "Updated machine memory state" Jan 29 16:14:18.864165 kubelet[2705]: I0129 16:14:18.864129 2705 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:14:18.864691 
kubelet[2705]: I0129 16:14:18.864357 2705 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:14:18.864691 kubelet[2705]: I0129 16:14:18.864476 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:14:18.914625 kubelet[2705]: I0129 16:14:18.914576 2705 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:14:18.922758 kubelet[2705]: I0129 16:14:18.922692 2705 topology_manager.go:215] "Topology Admit Handler" podUID="6e74863598c7abdf691100b5f7f3eac4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 16:14:18.922863 kubelet[2705]: I0129 16:14:18.922829 2705 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 16:14:18.922952 kubelet[2705]: I0129 16:14:18.922924 2705 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 16:14:19.012785 kubelet[2705]: I0129 16:14:19.012730 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e74863598c7abdf691100b5f7f3eac4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e74863598c7abdf691100b5f7f3eac4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:14:19.012785 kubelet[2705]: I0129 16:14:19.012771 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e74863598c7abdf691100b5f7f3eac4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e74863598c7abdf691100b5f7f3eac4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:14:19.012785 kubelet[2705]: I0129 16:14:19.012793 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:19.013008 kubelet[2705]: I0129 16:14:19.012812 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:19.013008 kubelet[2705]: I0129 16:14:19.012831 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e74863598c7abdf691100b5f7f3eac4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e74863598c7abdf691100b5f7f3eac4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:14:19.013008 kubelet[2705]: I0129 16:14:19.012845 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:19.013008 kubelet[2705]: I0129 16:14:19.012858 2705 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:19.013008 kubelet[2705]: I0129 16:14:19.012872 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:19.013158 kubelet[2705]: I0129 16:14:19.012892 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:14:19.028653 kubelet[2705]: E0129 16:14:19.027733 2705 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:14:19.045802 kubelet[2705]: I0129 16:14:19.045764 2705 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 16:14:19.045954 kubelet[2705]: I0129 16:14:19.045859 2705 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 16:14:19.194114 sudo[2741]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:14:19.194522 sudo[2741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:14:19.320372 kubelet[2705]: E0129 16:14:19.319212 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:19.320645 kubelet[2705]: E0129 16:14:19.320618 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:19.329963 kubelet[2705]: E0129 16:14:19.329938 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:19.671177 sudo[2741]: pam_unix(sudo:session): session closed for user root Jan 29 16:14:19.794979 kubelet[2705]: I0129 16:14:19.794911 2705 apiserver.go:52] "Watching apiserver" Jan 29 16:14:19.809595 kubelet[2705]: I0129 16:14:19.809518 2705 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:14:19.842367 kubelet[2705]: E0129 16:14:19.842319 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:19.897880 kubelet[2705]: E0129 16:14:19.897832 2705 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:14:19.898205 kubelet[2705]: E0129 16:14:19.898168 2705 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already 
exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:14:19.898535 kubelet[2705]: E0129 16:14:19.898501 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:19.905630 kubelet[2705]: E0129 16:14:19.905597 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:19.925795 kubelet[2705]: I0129 16:14:19.925583 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.92555632 podStartE2EDuration="2.92555632s" podCreationTimestamp="2025-01-29 16:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:14:19.89864446 +0000 UTC m=+1.161143715" watchObservedRunningTime="2025-01-29 16:14:19.92555632 +0000 UTC m=+1.188055565" Jan 29 16:14:19.946018 kubelet[2705]: I0129 16:14:19.945920 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9458975170000001 podStartE2EDuration="1.945897517s" podCreationTimestamp="2025-01-29 16:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:14:19.925778294 +0000 UTC m=+1.188277539" watchObservedRunningTime="2025-01-29 16:14:19.945897517 +0000 UTC m=+1.208396762" Jan 29 16:14:19.946018 kubelet[2705]: I0129 16:14:19.946014 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.94600918 podStartE2EDuration="1.94600918s" podCreationTimestamp="2025-01-29 16:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:14:19.94584836 +0000 UTC m=+1.208347605" watchObservedRunningTime="2025-01-29 16:14:19.94600918 +0000 UTC m=+1.208508445" Jan 29 16:14:20.844181 kubelet[2705]: E0129 16:14:20.844143 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:20.844181 kubelet[2705]: E0129 16:14:20.844170 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:21.286509 sudo[1700]: pam_unix(sudo:session): session closed for user root Jan 29 16:14:21.288263 sshd[1699]: Connection closed by 10.0.0.1 port 41284 Jan 29 16:14:21.291107 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:21.294682 systemd[1]: sshd@6-10.0.0.32:22-10.0.0.1:41284.service: Deactivated successfully. Jan 29 16:14:21.297158 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:14:21.297420 systemd[1]: session-7.scope: Consumed 5.487s CPU time, 278M memory peak. Jan 29 16:14:21.299736 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:14:21.300687 systemd-logind[1503]: Removed session 7. 
Jan 29 16:14:21.846223 kubelet[2705]: E0129 16:14:21.846150 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:22.846910 kubelet[2705]: E0129 16:14:22.846872 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:24.859129 update_engine[1504]: I20250129 16:14:24.859032 1504 update_attempter.cc:509] Updating boot flags... Jan 29 16:14:25.122384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2790) Jan 29 16:14:25.169471 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2793) Jan 29 16:14:25.200371 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2793) Jan 29 16:14:25.949393 kubelet[2705]: E0129 16:14:25.949355 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:26.752674 kubelet[2705]: E0129 16:14:26.752630 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:26.850922 kubelet[2705]: E0129 16:14:26.850873 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:26.851104 kubelet[2705]: E0129 16:14:26.850960 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:31.889932 kubelet[2705]: E0129 16:14:31.889888 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:32.659629 kubelet[2705]: I0129 16:14:32.659467 2705 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:14:32.659930 containerd[1514]: time="2025-01-29T16:14:32.659877614Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:14:32.660344 kubelet[2705]: I0129 16:14:32.660190 2705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:14:32.860690 kubelet[2705]: E0129 16:14:32.860653 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:33.550705 kubelet[2705]: I0129 16:14:33.550642 2705 topology_manager.go:215] "Topology Admit Handler" podUID="c84016a7-3c93-4c94-90ed-6c634b8e8ce1" podNamespace="kube-system" podName="kube-proxy-mq5hz" Jan 29 16:14:33.556099 kubelet[2705]: I0129 16:14:33.555471 2705 topology_manager.go:215] "Topology Admit Handler" podUID="59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" podNamespace="kube-system" podName="cilium-mfdjj" Jan 29 16:14:33.565972 systemd[1]: Created slice kubepods-besteffort-podc84016a7_3c93_4c94_90ed_6c634b8e8ce1.slice - libcontainer container kubepods-besteffort-podc84016a7_3c93_4c94_90ed_6c634b8e8ce1.slice. 
Jan 29 16:14:33.590047 systemd[1]: Created slice kubepods-burstable-pod59ea8f2b_ab7c_4214_a0e3_d0dd6ec85489.slice - libcontainer container kubepods-burstable-pod59ea8f2b_ab7c_4214_a0e3_d0dd6ec85489.slice. Jan 29 16:14:33.590964 kubelet[2705]: I0129 16:14:33.590352 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c84016a7-3c93-4c94-90ed-6c634b8e8ce1-lib-modules\") pod \"kube-proxy-mq5hz\" (UID: \"c84016a7-3c93-4c94-90ed-6c634b8e8ce1\") " pod="kube-system/kube-proxy-mq5hz" Jan 29 16:14:33.590964 kubelet[2705]: I0129 16:14:33.590397 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-run\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.590964 kubelet[2705]: I0129 16:14:33.590422 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-bpf-maps\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.590964 kubelet[2705]: I0129 16:14:33.590445 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-config-path\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.590964 kubelet[2705]: I0129 16:14:33.590468 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c84016a7-3c93-4c94-90ed-6c634b8e8ce1-xtables-lock\") pod \"kube-proxy-mq5hz\" (UID: \"c84016a7-3c93-4c94-90ed-6c634b8e8ce1\") " pod="kube-system/kube-proxy-mq5hz" Jan 29 16:14:33.590964 kubelet[2705]: I0129 16:14:33.590491 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-hubble-tls\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591240 kubelet[2705]: I0129 16:14:33.590511 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-etc-cni-netd\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591240 kubelet[2705]: I0129 16:14:33.590534 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-xtables-lock\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591240 kubelet[2705]: I0129 16:14:33.590557 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c84016a7-3c93-4c94-90ed-6c634b8e8ce1-kube-proxy\") pod \"kube-proxy-mq5hz\" (UID: \"c84016a7-3c93-4c94-90ed-6c634b8e8ce1\") " 
pod="kube-system/kube-proxy-mq5hz" Jan 29 16:14:33.591240 kubelet[2705]: I0129 16:14:33.590578 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvxs\" (UniqueName: \"kubernetes.io/projected/c84016a7-3c93-4c94-90ed-6c634b8e8ce1-kube-api-access-thvxs\") pod \"kube-proxy-mq5hz\" (UID: \"c84016a7-3c93-4c94-90ed-6c634b8e8ce1\") " pod="kube-system/kube-proxy-mq5hz" Jan 29 16:14:33.591240 kubelet[2705]: I0129 16:14:33.590603 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-clustermesh-secrets\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591437 kubelet[2705]: I0129 16:14:33.590635 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-host-proc-sys-kernel\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591437 kubelet[2705]: I0129 16:14:33.590656 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-hostproc\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591437 kubelet[2705]: I0129 16:14:33.590678 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cni-path\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591437 kubelet[2705]: I0129 16:14:33.590699 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-host-proc-sys-net\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591437 kubelet[2705]: I0129 16:14:33.590721 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pjs\" (UniqueName: \"kubernetes.io/projected/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-kube-api-access-v7pjs\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591437 kubelet[2705]: I0129 16:14:33.590741 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-cgroup\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.591659 kubelet[2705]: I0129 16:14:33.590763 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-lib-modules\") pod \"cilium-mfdjj\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " pod="kube-system/cilium-mfdjj" Jan 29 16:14:33.612883 kubelet[2705]: I0129 16:14:33.612641 2705 topology_manager.go:215] "Topology Admit Handler" 
podUID="0090dbaa-cb6d-4814-9ae1-912788b4cea2" podNamespace="kube-system" podName="cilium-operator-599987898-ljncj" Jan 29 16:14:33.621672 systemd[1]: Created slice kubepods-besteffort-pod0090dbaa_cb6d_4814_9ae1_912788b4cea2.slice - libcontainer container kubepods-besteffort-pod0090dbaa_cb6d_4814_9ae1_912788b4cea2.slice. Jan 29 16:14:33.692117 kubelet[2705]: I0129 16:14:33.691179 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0090dbaa-cb6d-4814-9ae1-912788b4cea2-cilium-config-path\") pod \"cilium-operator-599987898-ljncj\" (UID: \"0090dbaa-cb6d-4814-9ae1-912788b4cea2\") " pod="kube-system/cilium-operator-599987898-ljncj" Jan 29 16:14:33.692117 kubelet[2705]: I0129 16:14:33.691240 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgk8k\" (UniqueName: \"kubernetes.io/projected/0090dbaa-cb6d-4814-9ae1-912788b4cea2-kube-api-access-rgk8k\") pod \"cilium-operator-599987898-ljncj\" (UID: \"0090dbaa-cb6d-4814-9ae1-912788b4cea2\") " pod="kube-system/cilium-operator-599987898-ljncj" Jan 29 16:14:33.884817 kubelet[2705]: E0129 16:14:33.884629 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:33.885690 containerd[1514]: time="2025-01-29T16:14:33.885322132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mq5hz,Uid:c84016a7-3c93-4c94-90ed-6c634b8e8ce1,Namespace:kube-system,Attempt:0,}" Jan 29 16:14:33.896159 kubelet[2705]: E0129 16:14:33.896111 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:33.896958 containerd[1514]: time="2025-01-29T16:14:33.896602771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfdjj,Uid:59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489,Namespace:kube-system,Attempt:0,}" Jan 29 16:14:33.926607 containerd[1514]: time="2025-01-29T16:14:33.926493164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:14:33.926607 containerd[1514]: time="2025-01-29T16:14:33.926560953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:14:33.926607 containerd[1514]: time="2025-01-29T16:14:33.926578749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:33.926893 containerd[1514]: time="2025-01-29T16:14:33.926660645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:33.926927 kubelet[2705]: E0129 16:14:33.926804 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:33.928054 containerd[1514]: time="2025-01-29T16:14:33.927559529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ljncj,Uid:0090dbaa-cb6d-4814-9ae1-912788b4cea2,Namespace:kube-system,Attempt:0,}" Jan 29 16:14:33.929140 containerd[1514]: time="2025-01-29T16:14:33.929020485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:14:33.929140 containerd[1514]: time="2025-01-29T16:14:33.929099746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:14:33.929140 containerd[1514]: time="2025-01-29T16:14:33.929116489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:33.929424 containerd[1514]: time="2025-01-29T16:14:33.929303198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:33.951727 systemd[1]: Started cri-containerd-0247452ef00de20d67badff6015a36f26556c68ae5f312b3055b7e90fdf485f1.scope - libcontainer container 0247452ef00de20d67badff6015a36f26556c68ae5f312b3055b7e90fdf485f1. Jan 29 16:14:33.953493 systemd[1]: Started cri-containerd-74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491.scope - libcontainer container 74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491. Jan 29 16:14:33.968056 containerd[1514]: time="2025-01-29T16:14:33.967878131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:14:33.968204 containerd[1514]: time="2025-01-29T16:14:33.968058648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:14:33.968204 containerd[1514]: time="2025-01-29T16:14:33.968109601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:33.969008 containerd[1514]: time="2025-01-29T16:14:33.968961681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:14:33.990671 containerd[1514]: time="2025-01-29T16:14:33.990572619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfdjj,Uid:59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489,Namespace:kube-system,Attempt:0,} returns sandbox id \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\"" Jan 29 16:14:33.991796 kubelet[2705]: E0129 16:14:33.991769 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:33.992976 containerd[1514]: time="2025-01-29T16:14:33.992907818Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:14:33.993519 systemd[1]: Started cri-containerd-0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0.scope - libcontainer container 0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0. Jan 29 16:14:33.996359 containerd[1514]: time="2025-01-29T16:14:33.996199541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mq5hz,Uid:c84016a7-3c93-4c94-90ed-6c634b8e8ce1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0247452ef00de20d67badff6015a36f26556c68ae5f312b3055b7e90fdf485f1\"" Jan 29 16:14:33.997429 kubelet[2705]: E0129 16:14:33.997246 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:33.999727 containerd[1514]: time="2025-01-29T16:14:33.999657449Z" level=info msg="CreateContainer within sandbox \"0247452ef00de20d67badff6015a36f26556c68ae5f312b3055b7e90fdf485f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:14:34.041755 containerd[1514]: time="2025-01-29T16:14:34.041672336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ljncj,Uid:0090dbaa-cb6d-4814-9ae1-912788b4cea2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0\"" Jan 29 16:14:34.042695 kubelet[2705]: E0129 16:14:34.042659 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:34.044908 containerd[1514]: time="2025-01-29T16:14:34.044845747Z" level=info msg="CreateContainer within sandbox \"0247452ef00de20d67badff6015a36f26556c68ae5f312b3055b7e90fdf485f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1aa83313ae9899539a93990174b234e7bb51817224371d356b3e0b59a6e0ee96\"" Jan 29 16:14:34.045589 containerd[1514]: time="2025-01-29T16:14:34.045545863Z" level=info msg="StartContainer for \"1aa83313ae9899539a93990174b234e7bb51817224371d356b3e0b59a6e0ee96\"" Jan 29 16:14:34.079424 systemd[1]: Started cri-containerd-1aa83313ae9899539a93990174b234e7bb51817224371d356b3e0b59a6e0ee96.scope - libcontainer container 1aa83313ae9899539a93990174b234e7bb51817224371d356b3e0b59a6e0ee96. 
Jan 29 16:14:34.234698 containerd[1514]: time="2025-01-29T16:14:34.234629861Z" level=info msg="StartContainer for \"1aa83313ae9899539a93990174b234e7bb51817224371d356b3e0b59a6e0ee96\" returns successfully" Jan 29 16:14:34.866742 kubelet[2705]: E0129 16:14:34.866673 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:34.934859 kubelet[2705]: I0129 16:14:34.934449 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mq5hz" podStartSLOduration=1.9344234679999999 podStartE2EDuration="1.934423468s" podCreationTimestamp="2025-01-29 16:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:14:34.934315861 +0000 UTC m=+16.196815106" watchObservedRunningTime="2025-01-29 16:14:34.934423468 +0000 UTC m=+16.196922713" Jan 29 16:14:44.133617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971180119.mount: Deactivated successfully. Jan 29 16:14:45.964567 systemd[1]: Started sshd@7-10.0.0.32:22-10.0.0.1:52678.service - OpenSSH per-connection server daemon (10.0.0.1:52678). Jan 29 16:14:46.021963 sshd[3100]: Accepted publickey for core from 10.0.0.1 port 52678 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:14:46.024410 sshd-session[3100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:46.031500 systemd-logind[1503]: New session 8 of user core. Jan 29 16:14:46.038496 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:14:46.173588 sshd[3106]: Connection closed by 10.0.0.1 port 52678 Jan 29 16:14:46.174004 sshd-session[3100]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:46.177999 systemd[1]: sshd@7-10.0.0.32:22-10.0.0.1:52678.service: Deactivated successfully. Jan 29 16:14:46.180117 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:14:46.180864 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:14:46.181847 systemd-logind[1503]: Removed session 8. 
Jan 29 16:14:48.155699 containerd[1514]: time="2025-01-29T16:14:48.155621168Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:48.156639 containerd[1514]: time="2025-01-29T16:14:48.156539943Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:14:48.157831 containerd[1514]: time="2025-01-29T16:14:48.157797435Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:48.159649 containerd[1514]: time="2025-01-29T16:14:48.159610137Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.166664693s" Jan 29 16:14:48.159649 containerd[1514]: time="2025-01-29T16:14:48.159640196Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:14:48.160951 containerd[1514]: time="2025-01-29T16:14:48.160920883Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:14:48.164870 containerd[1514]: time="2025-01-29T16:14:48.164838573Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:14:48.250834 containerd[1514]: time="2025-01-29T16:14:48.250726100Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\"" Jan 29 16:14:48.251571 containerd[1514]: time="2025-01-29T16:14:48.251528889Z" level=info msg="StartContainer for \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\"" Jan 29 16:14:48.294646 systemd[1]: Started cri-containerd-294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c.scope - libcontainer container 294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c. Jan 29 16:14:48.653022 systemd[1]: cri-containerd-294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c.scope: Deactivated successfully. Jan 29 16:14:49.054041 containerd[1514]: time="2025-01-29T16:14:49.053640052Z" level=info msg="StartContainer for \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\" returns successfully" Jan 29 16:14:49.190126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c-rootfs.mount: Deactivated successfully. 
Jan 29 16:14:49.447065 containerd[1514]: time="2025-01-29T16:14:49.446980248Z" level=info msg="shim disconnected" id=294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c namespace=k8s.io Jan 29 16:14:49.447065 containerd[1514]: time="2025-01-29T16:14:49.447048923Z" level=warning msg="cleaning up after shim disconnected" id=294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c namespace=k8s.io Jan 29 16:14:49.447065 containerd[1514]: time="2025-01-29T16:14:49.447057940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:14:50.059715 kubelet[2705]: E0129 16:14:50.059678 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:50.061415 containerd[1514]: time="2025-01-29T16:14:50.061375727Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:14:50.084428 containerd[1514]: time="2025-01-29T16:14:50.084365951Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\"" Jan 29 16:14:50.087574 containerd[1514]: time="2025-01-29T16:14:50.087529243Z" level=info msg="StartContainer for \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\"" Jan 29 16:14:50.123415 systemd[1]: Started cri-containerd-f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051.scope - libcontainer container f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051. Jan 29 16:14:50.155364 containerd[1514]: time="2025-01-29T16:14:50.155308547Z" level=info msg="StartContainer for \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\" returns successfully" Jan 29 16:14:50.169816 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:14:50.170084 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:14:50.170475 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:14:50.176937 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:14:50.177244 systemd[1]: cri-containerd-f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051.scope: Deactivated successfully. Jan 29 16:14:50.193781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051-rootfs.mount: Deactivated successfully. Jan 29 16:14:50.195254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:14:50.201035 containerd[1514]: time="2025-01-29T16:14:50.200968747Z" level=info msg="shim disconnected" id=f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051 namespace=k8s.io Jan 29 16:14:50.201035 containerd[1514]: time="2025-01-29T16:14:50.201033063Z" level=warning msg="cleaning up after shim disconnected" id=f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051 namespace=k8s.io Jan 29 16:14:50.201165 containerd[1514]: time="2025-01-29T16:14:50.201042752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:14:50.306248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292470248.mount: Deactivated successfully. 
Jan 29 16:14:51.063325 kubelet[2705]: E0129 16:14:51.063256 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:51.065778 containerd[1514]: time="2025-01-29T16:14:51.065522242Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:14:51.089468 containerd[1514]: time="2025-01-29T16:14:51.089415583Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\"" Jan 29 16:14:51.090166 containerd[1514]: time="2025-01-29T16:14:51.090033304Z" level=info msg="StartContainer for \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\"" Jan 29 16:14:51.121418 systemd[1]: Started cri-containerd-8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f.scope - libcontainer container 8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f. Jan 29 16:14:51.162085 containerd[1514]: time="2025-01-29T16:14:51.161977417Z" level=info msg="StartContainer for \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\" returns successfully" Jan 29 16:14:51.163180 systemd[1]: cri-containerd-8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f.scope: Deactivated successfully. Jan 29 16:14:51.190582 systemd[1]: Started sshd@8-10.0.0.32:22-10.0.0.1:51154.service - OpenSSH per-connection server daemon (10.0.0.1:51154). Jan 29 16:14:51.199050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f-rootfs.mount: Deactivated successfully. Jan 29 16:14:51.218037 containerd[1514]: time="2025-01-29T16:14:51.217957568Z" level=info msg="shim disconnected" id=8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f namespace=k8s.io Jan 29 16:14:51.218037 containerd[1514]: time="2025-01-29T16:14:51.218016314Z" level=warning msg="cleaning up after shim disconnected" id=8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f namespace=k8s.io Jan 29 16:14:51.218037 containerd[1514]: time="2025-01-29T16:14:51.218025992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:14:51.231680 containerd[1514]: time="2025-01-29T16:14:51.231626047Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:14:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:14:51.277259 sshd[3311]: Accepted publickey for core from 10.0.0.1 port 51154 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:14:51.279632 sshd-session[3311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:51.284376 systemd-logind[1503]: New session 9 of user core. Jan 29 16:14:51.291417 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:14:51.423836 sshd[3326]: Connection closed by 10.0.0.1 port 51154 Jan 29 16:14:51.424325 sshd-session[3311]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:51.429227 systemd[1]: sshd@8-10.0.0.32:22-10.0.0.1:51154.service: Deactivated successfully. 
Jan 29 16:14:51.431929 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:14:51.432670 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:14:51.433551 systemd-logind[1503]: Removed session 9. Jan 29 16:14:52.067235 kubelet[2705]: E0129 16:14:52.067194 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:52.069897 containerd[1514]: time="2025-01-29T16:14:52.069765773Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:14:52.117940 containerd[1514]: time="2025-01-29T16:14:52.117852628Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\"" Jan 29 16:14:52.118517 containerd[1514]: time="2025-01-29T16:14:52.118482911Z" level=info msg="StartContainer for \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\"" Jan 29 16:14:52.153532 systemd[1]: Started cri-containerd-ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6.scope - libcontainer container ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6. Jan 29 16:14:52.182256 systemd[1]: cri-containerd-ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6.scope: Deactivated successfully. Jan 29 16:14:52.184933 containerd[1514]: time="2025-01-29T16:14:52.184882281Z" level=info msg="StartContainer for \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\" returns successfully" Jan 29 16:14:52.207902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6-rootfs.mount: Deactivated successfully. Jan 29 16:14:52.213619 containerd[1514]: time="2025-01-29T16:14:52.213550185Z" level=info msg="shim disconnected" id=ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6 namespace=k8s.io Jan 29 16:14:52.213619 containerd[1514]: time="2025-01-29T16:14:52.213612597Z" level=warning msg="cleaning up after shim disconnected" id=ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6 namespace=k8s.io Jan 29 16:14:52.213619 containerd[1514]: time="2025-01-29T16:14:52.213624260Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:14:53.071553 kubelet[2705]: E0129 16:14:53.071492 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:53.074765 containerd[1514]: time="2025-01-29T16:14:53.074714550Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:14:53.385131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327276774.mount: Deactivated successfully. 
Jan 29 16:14:53.390410 containerd[1514]: time="2025-01-29T16:14:53.390374078Z" level=info msg="CreateContainer within sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\"" Jan 29 16:14:53.393321 containerd[1514]: time="2025-01-29T16:14:53.390946205Z" level=info msg="StartContainer for \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\"" Jan 29 16:14:53.400640 containerd[1514]: time="2025-01-29T16:14:53.400589513Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:53.401967 containerd[1514]: time="2025-01-29T16:14:53.401857562Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:14:53.403742 containerd[1514]: time="2025-01-29T16:14:53.402875661Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:14:53.404216 containerd[1514]: time="2025-01-29T16:14:53.404186022Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.243228927s" Jan 29 16:14:53.404316 containerd[1514]: time="2025-01-29T16:14:53.404217173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:14:53.409066 containerd[1514]: time="2025-01-29T16:14:53.408647081Z" level=info msg="CreateContainer within sandbox \"0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:14:53.430408 containerd[1514]: time="2025-01-29T16:14:53.430344818Z" level=info msg="CreateContainer within sandbox \"0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\"" Jan 29 16:14:53.430521 systemd[1]: Started cri-containerd-16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997.scope - libcontainer container 16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997. Jan 29 16:14:53.432636 containerd[1514]: time="2025-01-29T16:14:53.432528095Z" level=info msg="StartContainer for \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\"" Jan 29 16:14:53.465527 systemd[1]: Started cri-containerd-ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832.scope - libcontainer container ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832. 
Jan 29 16:14:53.474639 containerd[1514]: time="2025-01-29T16:14:53.474584872Z" level=info msg="StartContainer for \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\" returns successfully" Jan 29 16:14:53.507843 containerd[1514]: time="2025-01-29T16:14:53.507779489Z" level=info msg="StartContainer for \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\" returns successfully" Jan 29 16:14:53.602585 kubelet[2705]: I0129 16:14:53.602544 2705 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 16:14:53.645321 kubelet[2705]: I0129 16:14:53.644859 2705 topology_manager.go:215] "Topology Admit Handler" podUID="4839dd7a-80ff-4a0d-9799-384b5a86a9ff" podNamespace="kube-system" podName="coredns-7db6d8ff4d-46jbb" Jan 29 16:14:53.646503 kubelet[2705]: I0129 16:14:53.646048 2705 topology_manager.go:215] "Topology Admit Handler" podUID="e5c40b88-0a5c-40b8-8cca-c6b357e10ffb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sh2pn" Jan 29 16:14:53.658941 systemd[1]: Created slice kubepods-burstable-pod4839dd7a_80ff_4a0d_9799_384b5a86a9ff.slice - libcontainer container kubepods-burstable-pod4839dd7a_80ff_4a0d_9799_384b5a86a9ff.slice. Jan 29 16:14:53.671447 systemd[1]: Created slice kubepods-burstable-pode5c40b88_0a5c_40b8_8cca_c6b357e10ffb.slice - libcontainer container kubepods-burstable-pode5c40b88_0a5c_40b8_8cca_c6b357e10ffb.slice. Jan 29 16:14:53.721974 kubelet[2705]: I0129 16:14:53.721863 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4839dd7a-80ff-4a0d-9799-384b5a86a9ff-config-volume\") pod \"coredns-7db6d8ff4d-46jbb\" (UID: \"4839dd7a-80ff-4a0d-9799-384b5a86a9ff\") " pod="kube-system/coredns-7db6d8ff4d-46jbb" Jan 29 16:14:53.722186 kubelet[2705]: I0129 16:14:53.721988 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcxwq\" (UniqueName: \"kubernetes.io/projected/4839dd7a-80ff-4a0d-9799-384b5a86a9ff-kube-api-access-lcxwq\") pod \"coredns-7db6d8ff4d-46jbb\" (UID: \"4839dd7a-80ff-4a0d-9799-384b5a86a9ff\") " pod="kube-system/coredns-7db6d8ff4d-46jbb" Jan 29 16:14:53.722186 kubelet[2705]: I0129 16:14:53.722068 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5c40b88-0a5c-40b8-8cca-c6b357e10ffb-config-volume\") pod \"coredns-7db6d8ff4d-sh2pn\" (UID: \"e5c40b88-0a5c-40b8-8cca-c6b357e10ffb\") " pod="kube-system/coredns-7db6d8ff4d-sh2pn" Jan 29 16:14:53.722186 kubelet[2705]: I0129 16:14:53.722089 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsrx4\" (UniqueName: \"kubernetes.io/projected/e5c40b88-0a5c-40b8-8cca-c6b357e10ffb-kube-api-access-xsrx4\") pod \"coredns-7db6d8ff4d-sh2pn\" (UID: \"e5c40b88-0a5c-40b8-8cca-c6b357e10ffb\") " pod="kube-system/coredns-7db6d8ff4d-sh2pn" Jan 29 16:14:53.965109 kubelet[2705]: E0129 16:14:53.965035 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:53.966136 containerd[1514]: time="2025-01-29T16:14:53.966082976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-46jbb,Uid:4839dd7a-80ff-4a0d-9799-384b5a86a9ff,Namespace:kube-system,Attempt:0,}" Jan 29 16:14:53.974347 kubelet[2705]: E0129 
16:14:53.974317 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:53.975739 containerd[1514]: time="2025-01-29T16:14:53.975146239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sh2pn,Uid:e5c40b88-0a5c-40b8-8cca-c6b357e10ffb,Namespace:kube-system,Attempt:0,}" Jan 29 16:14:54.075549 kubelet[2705]: E0129 16:14:54.075500 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:54.079559 kubelet[2705]: E0129 16:14:54.079530 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:54.442308 kubelet[2705]: I0129 16:14:54.442213 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ljncj" podStartSLOduration=2.080468204 podStartE2EDuration="21.442191079s" podCreationTimestamp="2025-01-29 16:14:33 +0000 UTC" firstStartedPulling="2025-01-29 16:14:34.043366854 +0000 UTC m=+15.305866099" lastFinishedPulling="2025-01-29 16:14:53.405089729 +0000 UTC m=+34.667588974" observedRunningTime="2025-01-29 16:14:54.194818186 +0000 UTC m=+35.457317421" watchObservedRunningTime="2025-01-29 16:14:54.442191079 +0000 UTC m=+35.704690334" Jan 29 16:14:55.081099 kubelet[2705]: E0129 16:14:55.081046 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:55.081682 kubelet[2705]: E0129 16:14:55.081136 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:56.082748 kubelet[2705]: E0129 16:14:56.082700 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:14:56.446137 systemd[1]: Started sshd@9-10.0.0.32:22-10.0.0.1:51160.service - OpenSSH per-connection server daemon (10.0.0.1:51160). Jan 29 16:14:56.507145 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 51160 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:14:56.509178 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:14:56.514827 systemd-logind[1503]: New session 10 of user core. Jan 29 16:14:56.520474 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:14:56.642205 sshd[3587]: Connection closed by 10.0.0.1 port 51160 Jan 29 16:14:56.642598 sshd-session[3585]: pam_unix(sshd:session): session closed for user core Jan 29 16:14:56.646747 systemd[1]: sshd@9-10.0.0.32:22-10.0.0.1:51160.service: Deactivated successfully. Jan 29 16:14:56.648870 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:14:56.649588 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:14:56.650637 systemd-logind[1503]: Removed session 10. 
Jan 29 16:14:56.978749 systemd-networkd[1441]: cilium_host: Link UP Jan 29 16:14:56.979149 systemd-networkd[1441]: cilium_net: Link UP Jan 29 16:14:56.979635 systemd-networkd[1441]: cilium_net: Gained carrier Jan 29 16:14:56.980012 systemd-networkd[1441]: cilium_host: Gained carrier Jan 29 16:14:57.057466 systemd-networkd[1441]: cilium_host: Gained IPv6LL Jan 29 16:14:57.094060 systemd-networkd[1441]: cilium_vxlan: Link UP Jan 29 16:14:57.094074 systemd-networkd[1441]: cilium_vxlan: Gained carrier Jan 29 16:14:57.341315 kernel: NET: Registered PF_ALG protocol family Jan 29 16:14:57.369500 systemd-networkd[1441]: cilium_net: Gained IPv6LL Jan 29 16:14:58.041426 systemd-networkd[1441]: lxc_health: Link UP Jan 29 16:14:58.049879 systemd-networkd[1441]: lxc_health: Gained carrier Jan 29 16:14:58.257451 systemd-networkd[1441]: lxc34d0b82424e8: Link UP Jan 29 16:14:58.259306 kernel: eth0: renamed from tmpbecf1 Jan 29 16:14:58.273395 systemd-networkd[1441]: lxc2e40aaafb8cf: Link UP Jan 29 16:14:58.279794 systemd-networkd[1441]: lxc34d0b82424e8: Gained carrier Jan 29 16:14:58.281302 kernel: eth0: renamed from tmpc2cba Jan 29 16:14:58.285063 systemd-networkd[1441]: lxc2e40aaafb8cf: Gained carrier Jan 29 16:14:58.601461 systemd-networkd[1441]: cilium_vxlan: Gained IPv6LL Jan 29 16:14:59.625583 systemd-networkd[1441]: lxc2e40aaafb8cf: Gained IPv6LL Jan 29 16:14:59.689525 systemd-networkd[1441]: lxc34d0b82424e8: Gained IPv6LL Jan 29 16:14:59.900743 kubelet[2705]: E0129 16:14:59.900581 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:00.073453 systemd-networkd[1441]: lxc_health: Gained IPv6LL Jan 29 16:15:00.094163 kubelet[2705]: E0129 16:15:00.094118 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:00.318904 kubelet[2705]: I0129 16:15:00.318831 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mfdjj" podStartSLOduration=13.15044817 podStartE2EDuration="27.318799422s" podCreationTimestamp="2025-01-29 16:14:33 +0000 UTC" firstStartedPulling="2025-01-29 16:14:33.99241131 +0000 UTC m=+15.254910565" lastFinishedPulling="2025-01-29 16:14:48.160762562 +0000 UTC m=+29.423261817" observedRunningTime="2025-01-29 16:14:54.457732846 +0000 UTC m=+35.720232091" watchObservedRunningTime="2025-01-29 16:15:00.318799422 +0000 UTC m=+41.581298667" Jan 29 16:15:01.668023 systemd[1]: Started sshd@10-10.0.0.32:22-10.0.0.1:42956.service - OpenSSH per-connection server daemon (10.0.0.1:42956). Jan 29 16:15:01.717592 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 42956 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:01.719442 sshd-session[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:01.724756 systemd-logind[1503]: New session 11 of user core. Jan 29 16:15:01.737571 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:15:01.844065 containerd[1514]: time="2025-01-29T16:15:01.843737012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:15:01.844065 containerd[1514]: time="2025-01-29T16:15:01.843811597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:15:01.844065 containerd[1514]: time="2025-01-29T16:15:01.843826636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:15:01.844065 containerd[1514]: time="2025-01-29T16:15:01.843964564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:15:01.845228 containerd[1514]: time="2025-01-29T16:15:01.845109328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:15:01.845317 containerd[1514]: time="2025-01-29T16:15:01.845245513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:15:01.845354 containerd[1514]: time="2025-01-29T16:15:01.845304878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:15:01.845494 containerd[1514]: time="2025-01-29T16:15:01.845444729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:15:01.869911 systemd[1]: run-containerd-runc-k8s.io-becf1eefc1853641ec3d8aa8818a71389e93860bcdfbb7800b634a7b0022074c-runc.a4UBDl.mount: Deactivated successfully. Jan 29 16:15:01.881198 sshd[3996]: Connection closed by 10.0.0.1 port 42956 Jan 29 16:15:01.881589 sshd-session[3994]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:01.883499 systemd[1]: Started cri-containerd-becf1eefc1853641ec3d8aa8818a71389e93860bcdfbb7800b634a7b0022074c.scope - libcontainer container becf1eefc1853641ec3d8aa8818a71389e93860bcdfbb7800b634a7b0022074c. Jan 29 16:15:01.885479 systemd[1]: Started cri-containerd-c2cba43437315e8e56935476fee88535d7e7ed8226dc150c1ab947379a6885db.scope - libcontainer container c2cba43437315e8e56935476fee88535d7e7ed8226dc150c1ab947379a6885db. Jan 29 16:15:01.889100 systemd[1]: sshd@10-10.0.0.32:22-10.0.0.1:42956.service: Deactivated successfully. Jan 29 16:15:01.892539 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:15:01.895142 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:15:01.899459 systemd-logind[1503]: Removed session 11. 
Jan 29 16:15:01.901161 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:15:01.904484 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:15:01.931713 containerd[1514]: time="2025-01-29T16:15:01.931570393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sh2pn,Uid:e5c40b88-0a5c-40b8-8cca-c6b357e10ffb,Namespace:kube-system,Attempt:0,} returns sandbox id \"becf1eefc1853641ec3d8aa8818a71389e93860bcdfbb7800b634a7b0022074c\"" Jan 29 16:15:01.932578 kubelet[2705]: E0129 16:15:01.932546 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:01.935308 containerd[1514]: time="2025-01-29T16:15:01.935239491Z" level=info msg="CreateContainer within sandbox \"becf1eefc1853641ec3d8aa8818a71389e93860bcdfbb7800b634a7b0022074c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:15:01.935772 containerd[1514]: time="2025-01-29T16:15:01.935728882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-46jbb,Uid:4839dd7a-80ff-4a0d-9799-384b5a86a9ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2cba43437315e8e56935476fee88535d7e7ed8226dc150c1ab947379a6885db\"" Jan 29 16:15:01.936927 kubelet[2705]: E0129 16:15:01.936885 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:01.940820 containerd[1514]: time="2025-01-29T16:15:01.940774856Z" level=info msg="CreateContainer within sandbox \"c2cba43437315e8e56935476fee88535d7e7ed8226dc150c1ab947379a6885db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:15:01.957416 containerd[1514]: time="2025-01-29T16:15:01.957350100Z" level=info msg="CreateContainer within sandbox \"becf1eefc1853641ec3d8aa8818a71389e93860bcdfbb7800b634a7b0022074c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4be00c1fd254ad67c372a25f6e33375e9331ced60ea80e0e20ea0650cd33c2d4\"" Jan 29 16:15:01.957879 containerd[1514]: time="2025-01-29T16:15:01.957842066Z" level=info msg="StartContainer for \"4be00c1fd254ad67c372a25f6e33375e9331ced60ea80e0e20ea0650cd33c2d4\"" Jan 29 16:15:01.968073 containerd[1514]: time="2025-01-29T16:15:01.968013718Z" level=info msg="CreateContainer within sandbox \"c2cba43437315e8e56935476fee88535d7e7ed8226dc150c1ab947379a6885db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"351721883a6cb6896c51c5003e1b409214afd0c58460725aaea060b39fd185a6\"" Jan 29 16:15:01.968536 containerd[1514]: time="2025-01-29T16:15:01.968510563Z" level=info msg="StartContainer for \"351721883a6cb6896c51c5003e1b409214afd0c58460725aaea060b39fd185a6\"" Jan 29 16:15:01.989452 systemd[1]: Started cri-containerd-4be00c1fd254ad67c372a25f6e33375e9331ced60ea80e0e20ea0650cd33c2d4.scope - libcontainer container 4be00c1fd254ad67c372a25f6e33375e9331ced60ea80e0e20ea0650cd33c2d4. Jan 29 16:15:01.993327 systemd[1]: Started cri-containerd-351721883a6cb6896c51c5003e1b409214afd0c58460725aaea060b39fd185a6.scope - libcontainer container 351721883a6cb6896c51c5003e1b409214afd0c58460725aaea060b39fd185a6. 
Jan 29 16:15:02.031104 containerd[1514]: time="2025-01-29T16:15:02.031038347Z" level=info msg="StartContainer for \"351721883a6cb6896c51c5003e1b409214afd0c58460725aaea060b39fd185a6\" returns successfully" Jan 29 16:15:02.031314 containerd[1514]: time="2025-01-29T16:15:02.031049077Z" level=info msg="StartContainer for \"4be00c1fd254ad67c372a25f6e33375e9331ced60ea80e0e20ea0650cd33c2d4\" returns successfully" Jan 29 16:15:02.124449 kubelet[2705]: E0129 16:15:02.124409 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:02.129499 kubelet[2705]: E0129 16:15:02.129458 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:02.173037 kubelet[2705]: I0129 16:15:02.172824 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-46jbb" podStartSLOduration=29.172797498 podStartE2EDuration="29.172797498s" podCreationTimestamp="2025-01-29 16:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:15:02.171389755 +0000 UTC m=+43.433889000" watchObservedRunningTime="2025-01-29 16:15:02.172797498 +0000 UTC m=+43.435296744" Jan 29 16:15:02.173737 kubelet[2705]: I0129 16:15:02.173705 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sh2pn" podStartSLOduration=29.173515202 podStartE2EDuration="29.173515202s" podCreationTimestamp="2025-01-29 16:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:15:02.147655978 +0000 UTC m=+43.410155243" watchObservedRunningTime="2025-01-29 16:15:02.173515202 +0000 UTC m=+43.436014477" Jan 29 16:15:03.131769 kubelet[2705]: E0129 16:15:03.131675 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:03.132532 kubelet[2705]: E0129 16:15:03.132168 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:04.133779 kubelet[2705]: E0129 16:15:04.133736 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:04.133779 kubelet[2705]: E0129 16:15:04.133736 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:06.899301 systemd[1]: Started sshd@11-10.0.0.32:22-10.0.0.1:42966.service - OpenSSH per-connection server daemon (10.0.0.1:42966). Jan 29 16:15:06.944876 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 42966 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:06.946424 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:06.950613 systemd-logind[1503]: New session 12 of user core. Jan 29 16:15:06.965414 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 16:15:07.091891 sshd[4179]: Connection closed by 10.0.0.1 port 42966 Jan 29 16:15:07.092392 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:07.105064 systemd[1]: sshd@11-10.0.0.32:22-10.0.0.1:42966.service: Deactivated successfully. Jan 29 16:15:07.107039 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:15:07.108590 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:15:07.118840 systemd[1]: Started sshd@12-10.0.0.32:22-10.0.0.1:42982.service - OpenSSH per-connection server daemon (10.0.0.1:42982). Jan 29 16:15:07.120110 systemd-logind[1503]: Removed session 12. Jan 29 16:15:07.156818 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 42982 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:07.158821 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:07.163318 systemd-logind[1503]: New session 13 of user core. Jan 29 16:15:07.171412 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:15:07.321993 sshd[4196]: Connection closed by 10.0.0.1 port 42982 Jan 29 16:15:07.322718 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:07.333198 systemd[1]: sshd@12-10.0.0.32:22-10.0.0.1:42982.service: Deactivated successfully. Jan 29 16:15:07.336993 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:15:07.338808 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:15:07.345895 systemd[1]: Started sshd@13-10.0.0.32:22-10.0.0.1:36404.service - OpenSSH per-connection server daemon (10.0.0.1:36404). Jan 29 16:15:07.346955 systemd-logind[1503]: Removed session 13. Jan 29 16:15:07.385124 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 36404 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:07.386693 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:07.391497 systemd-logind[1503]: New session 14 of user core. Jan 29 16:15:07.400534 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:15:07.515633 sshd[4209]: Connection closed by 10.0.0.1 port 36404 Jan 29 16:15:07.515947 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:07.520568 systemd[1]: sshd@13-10.0.0.32:22-10.0.0.1:36404.service: Deactivated successfully. Jan 29 16:15:07.522946 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:15:07.523826 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:15:07.524733 systemd-logind[1503]: Removed session 14. Jan 29 16:15:12.529557 systemd[1]: Started sshd@14-10.0.0.32:22-10.0.0.1:36410.service - OpenSSH per-connection server daemon (10.0.0.1:36410). Jan 29 16:15:12.572366 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 36410 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:12.573755 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:12.578221 systemd-logind[1503]: New session 15 of user core. Jan 29 16:15:12.592402 systemd[1]: Started session-15.scope - Session 15 of User core. 
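Sessions 12 through 14 above follow the same pattern: sshd accepts the key, systemd-logind opens session N, the client disconnects within a second or two, and the session scope is deactivated. A small sketch for pairing those journal lines into per-session durations, assuming one entry per line and the exact phrasing seen here:

    import re
    from datetime import datetime

    NEW = re.compile(r"(\w{3} \d{2} [\d:.]+) .*New session (\d+) of user")
    REMOVED = re.compile(r"(\w{3} \d{2} [\d:.]+) .*Removed session (\d+)\.")

    def session_durations(journal_lines):
        """Map session id -> seconds between 'New session' and 'Removed session'."""
        opened, durations = {}, {}
        for line in journal_lines:
            if m := NEW.search(line):
                opened[m.group(2)] = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
            elif (m := REMOVED.search(line)) and m.group(2) in opened:
                ended = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
                durations[m.group(2)] = (ended - opened.pop(m.group(2))).total_seconds()
        return durations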
Jan 29 16:15:12.713927 sshd[4225]: Connection closed by 10.0.0.1 port 36410 Jan 29 16:15:12.714374 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:12.718871 systemd[1]: sshd@14-10.0.0.32:22-10.0.0.1:36410.service: Deactivated successfully. Jan 29 16:15:12.721103 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:15:12.721801 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:15:12.722735 systemd-logind[1503]: Removed session 15. Jan 29 16:15:17.726655 systemd[1]: Started sshd@15-10.0.0.32:22-10.0.0.1:59604.service - OpenSSH per-connection server daemon (10.0.0.1:59604). Jan 29 16:15:17.769833 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 59604 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:17.771709 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:17.775864 systemd-logind[1503]: New session 16 of user core. Jan 29 16:15:17.785488 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:15:17.894733 sshd[4240]: Connection closed by 10.0.0.1 port 59604 Jan 29 16:15:17.895148 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:17.909423 systemd[1]: sshd@15-10.0.0.32:22-10.0.0.1:59604.service: Deactivated successfully. Jan 29 16:15:17.911669 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:15:17.913671 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:15:17.926865 systemd[1]: Started sshd@16-10.0.0.32:22-10.0.0.1:59620.service - OpenSSH per-connection server daemon (10.0.0.1:59620). Jan 29 16:15:17.928141 systemd-logind[1503]: Removed session 16. Jan 29 16:15:17.969307 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 59620 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:17.971152 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:17.976783 systemd-logind[1503]: New session 17 of user core. Jan 29 16:15:17.984456 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:15:18.216158 sshd[4256]: Connection closed by 10.0.0.1 port 59620 Jan 29 16:15:18.216751 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:18.228853 systemd[1]: sshd@16-10.0.0.32:22-10.0.0.1:59620.service: Deactivated successfully. Jan 29 16:15:18.231318 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:15:18.233376 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:15:18.242846 systemd[1]: Started sshd@17-10.0.0.32:22-10.0.0.1:59628.service - OpenSSH per-connection server daemon (10.0.0.1:59628). Jan 29 16:15:18.244167 systemd-logind[1503]: Removed session 17. Jan 29 16:15:18.291342 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 59628 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:18.293408 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:18.299050 systemd-logind[1503]: New session 18 of user core. Jan 29 16:15:18.307417 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 16:15:20.054726 sshd[4269]: Connection closed by 10.0.0.1 port 59628 Jan 29 16:15:20.055532 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:20.065022 systemd[1]: sshd@17-10.0.0.32:22-10.0.0.1:59628.service: Deactivated successfully. Jan 29 16:15:20.068059 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:15:20.068980 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:15:20.076766 systemd[1]: Started sshd@18-10.0.0.32:22-10.0.0.1:59636.service - OpenSSH per-connection server daemon (10.0.0.1:59636). Jan 29 16:15:20.078822 systemd-logind[1503]: Removed session 18. Jan 29 16:15:20.117376 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 59636 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:20.119230 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:20.124541 systemd-logind[1503]: New session 19 of user core. Jan 29 16:15:20.135521 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:15:20.697635 sshd[4296]: Connection closed by 10.0.0.1 port 59636 Jan 29 16:15:20.699868 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:20.722339 systemd[1]: sshd@18-10.0.0.32:22-10.0.0.1:59636.service: Deactivated successfully. Jan 29 16:15:20.737695 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:15:20.749666 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:15:20.760963 systemd[1]: Started sshd@19-10.0.0.32:22-10.0.0.1:59650.service - OpenSSH per-connection server daemon (10.0.0.1:59650). Jan 29 16:15:20.762164 systemd-logind[1503]: Removed session 19. Jan 29 16:15:20.830546 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 59650 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:20.834091 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:20.858367 systemd-logind[1503]: New session 20 of user core. Jan 29 16:15:20.873320 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:15:21.090455 kernel: hrtimer: interrupt took 1910018 ns Jan 29 16:15:21.101173 sshd[4310]: Connection closed by 10.0.0.1 port 59650 Jan 29 16:15:21.103737 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:21.122101 systemd[1]: sshd@19-10.0.0.32:22-10.0.0.1:59650.service: Deactivated successfully. Jan 29 16:15:21.128705 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:15:21.130903 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:15:21.140984 systemd-logind[1503]: Removed session 20. Jan 29 16:15:26.113586 systemd[1]: Started sshd@20-10.0.0.32:22-10.0.0.1:59656.service - OpenSSH per-connection server daemon (10.0.0.1:59656). Jan 29 16:15:26.157486 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 59656 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:26.159337 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:26.164178 systemd-logind[1503]: New session 21 of user core. Jan 29 16:15:26.171536 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 16:15:26.278794 sshd[4326]: Connection closed by 10.0.0.1 port 59656 Jan 29 16:15:26.279156 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:26.283189 systemd[1]: sshd@20-10.0.0.32:22-10.0.0.1:59656.service: Deactivated successfully. Jan 29 16:15:26.285582 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:15:26.286371 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:15:26.287407 systemd-logind[1503]: Removed session 21. Jan 29 16:15:29.824110 kubelet[2705]: E0129 16:15:29.824045 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:31.301647 systemd[1]: Started sshd@21-10.0.0.32:22-10.0.0.1:40424.service - OpenSSH per-connection server daemon (10.0.0.1:40424). Jan 29 16:15:31.345251 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 40424 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:31.347059 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:31.351932 systemd-logind[1503]: New session 22 of user core. Jan 29 16:15:31.363618 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:15:31.477489 sshd[4344]: Connection closed by 10.0.0.1 port 40424 Jan 29 16:15:31.477928 sshd-session[4342]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:31.482884 systemd[1]: sshd@21-10.0.0.32:22-10.0.0.1:40424.service: Deactivated successfully. Jan 29 16:15:31.485611 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:15:31.486747 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:15:31.487796 systemd-logind[1503]: Removed session 22. Jan 29 16:15:36.491023 systemd[1]: Started sshd@22-10.0.0.32:22-10.0.0.1:40428.service - OpenSSH per-connection server daemon (10.0.0.1:40428). Jan 29 16:15:36.537200 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 40428 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:36.539630 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:36.545453 systemd-logind[1503]: New session 23 of user core. Jan 29 16:15:36.553531 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:15:36.669580 sshd[4362]: Connection closed by 10.0.0.1 port 40428 Jan 29 16:15:36.669982 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:36.674384 systemd[1]: sshd@22-10.0.0.32:22-10.0.0.1:40428.service: Deactivated successfully. Jan 29 16:15:36.676805 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:15:36.677869 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:15:36.678901 systemd-logind[1503]: Removed session 23. Jan 29 16:15:41.683349 systemd[1]: Started sshd@23-10.0.0.32:22-10.0.0.1:58116.service - OpenSSH per-connection server daemon (10.0.0.1:58116). Jan 29 16:15:41.726638 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 58116 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:41.728820 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:41.733843 systemd-logind[1503]: New session 24 of user core. Jan 29 16:15:41.743410 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 29 16:15:41.856022 sshd[4377]: Connection closed by 10.0.0.1 port 58116 Jan 29 16:15:41.856666 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:41.870775 systemd[1]: sshd@23-10.0.0.32:22-10.0.0.1:58116.service: Deactivated successfully. Jan 29 16:15:41.873448 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:15:41.875906 systemd-logind[1503]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:15:41.884573 systemd[1]: Started sshd@24-10.0.0.32:22-10.0.0.1:58130.service - OpenSSH per-connection server daemon (10.0.0.1:58130). Jan 29 16:15:41.885793 systemd-logind[1503]: Removed session 24. Jan 29 16:15:41.923735 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 58130 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:15:41.925528 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:41.929790 systemd-logind[1503]: New session 25 of user core. Jan 29 16:15:41.939441 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 16:15:43.472860 containerd[1514]: time="2025-01-29T16:15:43.472799609Z" level=info msg="StopContainer for \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\" with timeout 30 (s)" Jan 29 16:15:43.473947 containerd[1514]: time="2025-01-29T16:15:43.473923414Z" level=info msg="Stop container \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\" with signal terminated" Jan 29 16:15:43.492519 systemd[1]: run-containerd-runc-k8s.io-16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997-runc.fTjf4R.mount: Deactivated successfully. Jan 29 16:15:43.493591 systemd[1]: cri-containerd-ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832.scope: Deactivated successfully. Jan 29 16:15:43.515713 containerd[1514]: time="2025-01-29T16:15:43.515624067Z" level=info msg="StopContainer for \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\" with timeout 2 (s)" Jan 29 16:15:43.518148 containerd[1514]: time="2025-01-29T16:15:43.515722805Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:15:43.518148 containerd[1514]: time="2025-01-29T16:15:43.517532162Z" level=info msg="Stop container \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\" with signal terminated" Jan 29 16:15:43.518993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832-rootfs.mount: Deactivated successfully. 
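Alongside the StopContainer requests, containerd logs an error because removing /etc/cni/net.d/05-cilium.conf left it with no network configuration to load, so the CNI plugin reverts to uninitialized. A sketch of that check under the conventional layout (the directory path and the accepted extensions are assumptions, not taken from this log):

    from pathlib import Path

    def cni_configs(net_d: str = "/etc/cni/net.d"):
        """List the CNI config files that could be loaded from net_d, if any."""
        d = Path(net_d)
        if not d.is_dir():
            return []
        return sorted(p.name for p in d.iterdir() if p.suffix in (".conf", ".conflist", ".json"))

    if __name__ == "__main__":
        found = cni_configs()
        print(found if found else "no network config found in /etc/cni/net.d")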
Jan 29 16:15:43.526354 systemd-networkd[1441]: lxc_health: Link DOWN Jan 29 16:15:43.526365 systemd-networkd[1441]: lxc_health: Lost carrier Jan 29 16:15:43.536382 containerd[1514]: time="2025-01-29T16:15:43.535539794Z" level=info msg="shim disconnected" id=ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832 namespace=k8s.io Jan 29 16:15:43.536382 containerd[1514]: time="2025-01-29T16:15:43.535618583Z" level=warning msg="cleaning up after shim disconnected" id=ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832 namespace=k8s.io Jan 29 16:15:43.536382 containerd[1514]: time="2025-01-29T16:15:43.535637530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:43.546886 systemd[1]: cri-containerd-16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997.scope: Deactivated successfully. Jan 29 16:15:43.547396 systemd[1]: cri-containerd-16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997.scope: Consumed 7.210s CPU time, 123.2M memory peak, 160K read from disk, 13.3M written to disk. Jan 29 16:15:43.562944 containerd[1514]: time="2025-01-29T16:15:43.562881986Z" level=info msg="StopContainer for \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\" returns successfully" Jan 29 16:15:43.567133 containerd[1514]: time="2025-01-29T16:15:43.567065853Z" level=info msg="StopPodSandbox for \"0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0\"" Jan 29 16:15:43.571722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997-rootfs.mount: Deactivated successfully. Jan 29 16:15:43.578055 containerd[1514]: time="2025-01-29T16:15:43.577978429Z" level=info msg="shim disconnected" id=16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997 namespace=k8s.io Jan 29 16:15:43.578055 containerd[1514]: time="2025-01-29T16:15:43.578047851Z" level=warning msg="cleaning up after shim disconnected" id=16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997 namespace=k8s.io Jan 29 16:15:43.578327 containerd[1514]: time="2025-01-29T16:15:43.578059784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:43.587113 containerd[1514]: time="2025-01-29T16:15:43.567131878Z" level=info msg="Container to stop \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:43.590177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0-shm.mount: Deactivated successfully. Jan 29 16:15:43.594857 containerd[1514]: time="2025-01-29T16:15:43.593130438Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:15:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:15:43.595093 systemd[1]: cri-containerd-0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0.scope: Deactivated successfully. 
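When the cilium-agent container scope is deactivated, systemd prints the resource accounting it gathered over the container's lifetime. A small sketch for pulling those figures back out of the message (the regex only targets the phrasing shown above):

    import re

    line = ("Consumed 7.210s CPU time, 123.2M memory peak, "
            "160K read from disk, 13.3M written to disk.")

    m = re.search(r"Consumed ([\d.]+)s CPU time, ([\d.]+[KMG]) memory peak", line)
    if m:
        cpu_seconds = float(m.group(1))   # 7.21
        memory_peak = m.group(2)          # '123.2M'
        print(cpu_seconds, memory_peak)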
Jan 29 16:15:43.597182 containerd[1514]: time="2025-01-29T16:15:43.597133281Z" level=info msg="StopContainer for \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\" returns successfully" Jan 29 16:15:43.597788 containerd[1514]: time="2025-01-29T16:15:43.597752677Z" level=info msg="StopPodSandbox for \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\"" Jan 29 16:15:43.597851 containerd[1514]: time="2025-01-29T16:15:43.597800328Z" level=info msg="Container to stop \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:43.597851 containerd[1514]: time="2025-01-29T16:15:43.597844702Z" level=info msg="Container to stop \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:43.597930 containerd[1514]: time="2025-01-29T16:15:43.597856204Z" level=info msg="Container to stop \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:43.597930 containerd[1514]: time="2025-01-29T16:15:43.597868387Z" level=info msg="Container to stop \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:43.597930 containerd[1514]: time="2025-01-29T16:15:43.597880079Z" level=info msg="Container to stop \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:15:43.604837 systemd[1]: cri-containerd-74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491.scope: Deactivated successfully. 
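The StopPodSandbox call for the cilium pod walks every container recorded in the sandbox and emits one "must be in running or unknown state" message per container that has already exited; only running or unknown containers would actually be signalled. A rough sketch of that control flow with stand-in state strings and a stand-in stop callable (this is not containerd's real API):

    def stop_sandbox_containers(containers: dict, stop) -> None:
        """containers maps container id -> CRI state string; stop(cid) signals it."""
        for cid, state in containers.items():
            if state in ("CONTAINER_RUNNING", "CONTAINER_UNKNOWN"):
                stop(cid)
            else:
                print(f'Container to stop "{cid}" must be in running or unknown state, '
                      f'current state "{state}"')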
Jan 29 16:15:43.629771 containerd[1514]: time="2025-01-29T16:15:43.629709154Z" level=info msg="shim disconnected" id=0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0 namespace=k8s.io Jan 29 16:15:43.629771 containerd[1514]: time="2025-01-29T16:15:43.629765400Z" level=warning msg="cleaning up after shim disconnected" id=0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0 namespace=k8s.io Jan 29 16:15:43.629771 containerd[1514]: time="2025-01-29T16:15:43.629773846Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:43.630060 containerd[1514]: time="2025-01-29T16:15:43.629721477Z" level=info msg="shim disconnected" id=74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491 namespace=k8s.io Jan 29 16:15:43.630060 containerd[1514]: time="2025-01-29T16:15:43.629846284Z" level=warning msg="cleaning up after shim disconnected" id=74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491 namespace=k8s.io Jan 29 16:15:43.630060 containerd[1514]: time="2025-01-29T16:15:43.629854930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:15:43.644483 containerd[1514]: time="2025-01-29T16:15:43.644421246Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:15:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:15:43.646097 containerd[1514]: time="2025-01-29T16:15:43.645970710Z" level=info msg="TearDown network for sandbox \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" successfully" Jan 29 16:15:43.646097 containerd[1514]: time="2025-01-29T16:15:43.645992962Z" level=info msg="StopPodSandbox for \"74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491\" returns successfully" Jan 29 16:15:43.646879 containerd[1514]: time="2025-01-29T16:15:43.646853216Z" level=info msg="TearDown network for sandbox \"0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0\" successfully" Jan 29 16:15:43.646879 containerd[1514]: time="2025-01-29T16:15:43.646872693Z" level=info msg="StopPodSandbox for \"0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0\" returns successfully" Jan 29 16:15:43.824142 kubelet[2705]: I0129 16:15:43.823956 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-host-proc-sys-kernel\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824142 kubelet[2705]: I0129 16:15:43.824043 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7pjs\" (UniqueName: \"kubernetes.io/projected/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-kube-api-access-v7pjs\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824142 kubelet[2705]: I0129 16:15:43.824072 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-config-path\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824142 kubelet[2705]: I0129 16:15:43.824090 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-host-proc-sys-kernel" 
(OuterVolumeSpecName: "host-proc-sys-kernel") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.824142 kubelet[2705]: I0129 16:15:43.824101 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-clustermesh-secrets\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824817 kubelet[2705]: I0129 16:15:43.824163 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-etc-cni-netd\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824817 kubelet[2705]: I0129 16:15:43.824181 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-hostproc\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824817 kubelet[2705]: I0129 16:15:43.824195 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-host-proc-sys-net\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824817 kubelet[2705]: I0129 16:15:43.824210 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-cgroup\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824817 kubelet[2705]: I0129 16:15:43.824227 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-hubble-tls\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824817 kubelet[2705]: I0129 16:15:43.824251 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-xtables-lock\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824970 kubelet[2705]: I0129 16:15:43.824313 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cni-path\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824970 kubelet[2705]: I0129 16:15:43.824333 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0090dbaa-cb6d-4814-9ae1-912788b4cea2-cilium-config-path\") pod \"0090dbaa-cb6d-4814-9ae1-912788b4cea2\" (UID: \"0090dbaa-cb6d-4814-9ae1-912788b4cea2\") " Jan 29 16:15:43.824970 kubelet[2705]: I0129 16:15:43.824349 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgk8k\" 
(UniqueName: \"kubernetes.io/projected/0090dbaa-cb6d-4814-9ae1-912788b4cea2-kube-api-access-rgk8k\") pod \"0090dbaa-cb6d-4814-9ae1-912788b4cea2\" (UID: \"0090dbaa-cb6d-4814-9ae1-912788b4cea2\") " Jan 29 16:15:43.824970 kubelet[2705]: I0129 16:15:43.824364 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-bpf-maps\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824970 kubelet[2705]: I0129 16:15:43.824376 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-lib-modules\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.824970 kubelet[2705]: I0129 16:15:43.824425 2705 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-run\") pod \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\" (UID: \"59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489\") " Jan 29 16:15:43.825139 kubelet[2705]: I0129 16:15:43.824454 2705 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.825139 kubelet[2705]: I0129 16:15:43.824476 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.825139 kubelet[2705]: I0129 16:15:43.824495 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.825139 kubelet[2705]: I0129 16:15:43.824509 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-hostproc" (OuterVolumeSpecName: "hostproc") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.825139 kubelet[2705]: I0129 16:15:43.824524 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.825320 kubelet[2705]: I0129 16:15:43.824538 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.828417 kubelet[2705]: I0129 16:15:43.827526 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.828417 kubelet[2705]: I0129 16:15:43.827598 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cni-path" (OuterVolumeSpecName: "cni-path") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.828595 kubelet[2705]: I0129 16:15:43.828450 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.828826 kubelet[2705]: I0129 16:15:43.828781 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:43.829918 kubelet[2705]: I0129 16:15:43.828955 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:43.829918 kubelet[2705]: I0129 16:15:43.829783 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:43.830874 kubelet[2705]: I0129 16:15:43.830826 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:43.831502 kubelet[2705]: I0129 16:15:43.831475 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-kube-api-access-v7pjs" (OuterVolumeSpecName: "kube-api-access-v7pjs") pod "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" (UID: "59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489"). InnerVolumeSpecName "kube-api-access-v7pjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:43.831824 kubelet[2705]: I0129 16:15:43.831784 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0090dbaa-cb6d-4814-9ae1-912788b4cea2-kube-api-access-rgk8k" (OuterVolumeSpecName: "kube-api-access-rgk8k") pod "0090dbaa-cb6d-4814-9ae1-912788b4cea2" (UID: "0090dbaa-cb6d-4814-9ae1-912788b4cea2"). InnerVolumeSpecName "kube-api-access-rgk8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:43.832500 kubelet[2705]: I0129 16:15:43.832455 2705 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0090dbaa-cb6d-4814-9ae1-912788b4cea2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0090dbaa-cb6d-4814-9ae1-912788b4cea2" (UID: "0090dbaa-cb6d-4814-9ae1-912788b4cea2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:43.888702 kubelet[2705]: E0129 16:15:43.888649 2705 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:15:43.925463 kubelet[2705]: I0129 16:15:43.925381 2705 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925463 kubelet[2705]: I0129 16:15:43.925431 2705 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0090dbaa-cb6d-4814-9ae1-912788b4cea2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925463 kubelet[2705]: I0129 16:15:43.925445 2705 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rgk8k\" (UniqueName: \"kubernetes.io/projected/0090dbaa-cb6d-4814-9ae1-912788b4cea2-kube-api-access-rgk8k\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925463 kubelet[2705]: I0129 16:15:43.925454 2705 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925463 kubelet[2705]: I0129 16:15:43.925466 2705 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925463 kubelet[2705]: I0129 16:15:43.925477 2705 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925463 kubelet[2705]: I0129 16:15:43.925489 2705 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v7pjs\" (UniqueName: \"kubernetes.io/projected/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-kube-api-access-v7pjs\") on node \"localhost\" 
DevicePath \"\"" Jan 29 16:15:43.925756 kubelet[2705]: I0129 16:15:43.925498 2705 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925756 kubelet[2705]: I0129 16:15:43.925507 2705 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925756 kubelet[2705]: I0129 16:15:43.925515 2705 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925756 kubelet[2705]: I0129 16:15:43.925524 2705 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925756 kubelet[2705]: I0129 16:15:43.925532 2705 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925756 kubelet[2705]: I0129 16:15:43.925540 2705 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925756 kubelet[2705]: I0129 16:15:43.925548 2705 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:43.925756 kubelet[2705]: I0129 16:15:43.925561 2705 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 16:15:44.229252 kubelet[2705]: I0129 16:15:44.229215 2705 scope.go:117] "RemoveContainer" containerID="ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832" Jan 29 16:15:44.235041 systemd[1]: Removed slice kubepods-besteffort-pod0090dbaa_cb6d_4814_9ae1_912788b4cea2.slice - libcontainer container kubepods-besteffort-pod0090dbaa_cb6d_4814_9ae1_912788b4cea2.slice. Jan 29 16:15:44.237432 containerd[1514]: time="2025-01-29T16:15:44.237379745Z" level=info msg="RemoveContainer for \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\"" Jan 29 16:15:44.243040 containerd[1514]: time="2025-01-29T16:15:44.242897909Z" level=info msg="RemoveContainer for \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\" returns successfully" Jan 29 16:15:44.243223 kubelet[2705]: I0129 16:15:44.243189 2705 scope.go:117] "RemoveContainer" containerID="ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832" Jan 29 16:15:44.243340 systemd[1]: Removed slice kubepods-burstable-pod59ea8f2b_ab7c_4214_a0e3_d0dd6ec85489.slice - libcontainer container kubepods-burstable-pod59ea8f2b_ab7c_4214_a0e3_d0dd6ec85489.slice. 
Jan 29 16:15:44.243787 containerd[1514]: time="2025-01-29T16:15:44.243485216Z" level=error msg="ContainerStatus for \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\": not found" Jan 29 16:15:44.243828 kubelet[2705]: E0129 16:15:44.243706 2705 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\": not found" containerID="ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832" Jan 29 16:15:44.243828 kubelet[2705]: I0129 16:15:44.243744 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832"} err="failed to get container status \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea61d349f3ab69a0eca9b53b3971a1a73758aeb7dfa2704ee975830faf37e832\": not found" Jan 29 16:15:44.243478 systemd[1]: kubepods-burstable-pod59ea8f2b_ab7c_4214_a0e3_d0dd6ec85489.slice: Consumed 7.328s CPU time, 123.5M memory peak, 184K read from disk, 13.3M written to disk. Jan 29 16:15:44.243984 kubelet[2705]: I0129 16:15:44.243845 2705 scope.go:117] "RemoveContainer" containerID="16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997" Jan 29 16:15:44.245225 containerd[1514]: time="2025-01-29T16:15:44.245176129Z" level=info msg="RemoveContainer for \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\"" Jan 29 16:15:44.255247 containerd[1514]: time="2025-01-29T16:15:44.255186027Z" level=info msg="RemoveContainer for \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\" returns successfully" Jan 29 16:15:44.255575 kubelet[2705]: I0129 16:15:44.255515 2705 scope.go:117] "RemoveContainer" containerID="ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6" Jan 29 16:15:44.257429 containerd[1514]: time="2025-01-29T16:15:44.257386698Z" level=info msg="RemoveContainer for \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\"" Jan 29 16:15:44.261919 containerd[1514]: time="2025-01-29T16:15:44.261787720Z" level=info msg="RemoveContainer for \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\" returns successfully" Jan 29 16:15:44.262470 kubelet[2705]: I0129 16:15:44.262340 2705 scope.go:117] "RemoveContainer" containerID="8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f" Jan 29 16:15:44.263516 containerd[1514]: time="2025-01-29T16:15:44.263477631Z" level=info msg="RemoveContainer for \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\"" Jan 29 16:15:44.267033 containerd[1514]: time="2025-01-29T16:15:44.266984053Z" level=info msg="RemoveContainer for \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\" returns successfully" Jan 29 16:15:44.267222 kubelet[2705]: I0129 16:15:44.267181 2705 scope.go:117] "RemoveContainer" containerID="f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051" Jan 29 16:15:44.268399 containerd[1514]: time="2025-01-29T16:15:44.268302037Z" level=info msg="RemoveContainer for \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\"" Jan 29 16:15:44.272379 containerd[1514]: 
time="2025-01-29T16:15:44.272339859Z" level=info msg="RemoveContainer for \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\" returns successfully" Jan 29 16:15:44.272547 kubelet[2705]: I0129 16:15:44.272519 2705 scope.go:117] "RemoveContainer" containerID="294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c" Jan 29 16:15:44.273606 containerd[1514]: time="2025-01-29T16:15:44.273565367Z" level=info msg="RemoveContainer for \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\"" Jan 29 16:15:44.277496 containerd[1514]: time="2025-01-29T16:15:44.277465938Z" level=info msg="RemoveContainer for \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\" returns successfully" Jan 29 16:15:44.277672 kubelet[2705]: I0129 16:15:44.277637 2705 scope.go:117] "RemoveContainer" containerID="16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997" Jan 29 16:15:44.277909 containerd[1514]: time="2025-01-29T16:15:44.277822606Z" level=error msg="ContainerStatus for \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\": not found" Jan 29 16:15:44.278046 kubelet[2705]: E0129 16:15:44.278013 2705 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\": not found" containerID="16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997" Jan 29 16:15:44.278098 kubelet[2705]: I0129 16:15:44.278053 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997"} err="failed to get container status \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\": rpc error: code = NotFound desc = an error occurred when try to find container \"16e64723ae49e331d8b64cdaf5fd6605d0463c40a584db1fcb3b187b303f8997\": not found" Jan 29 16:15:44.278098 kubelet[2705]: I0129 16:15:44.278084 2705 scope.go:117] "RemoveContainer" containerID="ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6" Jan 29 16:15:44.278388 containerd[1514]: time="2025-01-29T16:15:44.278343636Z" level=error msg="ContainerStatus for \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\": not found" Jan 29 16:15:44.278577 kubelet[2705]: E0129 16:15:44.278477 2705 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\": not found" containerID="ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6" Jan 29 16:15:44.278577 kubelet[2705]: I0129 16:15:44.278508 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6"} err="failed to get container status \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac0fdd0643dbf5c22c55ad72d09b2792f8bd8a49d576f16a8c080a31984679c6\": not 
found" Jan 29 16:15:44.278577 kubelet[2705]: I0129 16:15:44.278537 2705 scope.go:117] "RemoveContainer" containerID="8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f" Jan 29 16:15:44.278725 containerd[1514]: time="2025-01-29T16:15:44.278689132Z" level=error msg="ContainerStatus for \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\": not found" Jan 29 16:15:44.278831 kubelet[2705]: E0129 16:15:44.278807 2705 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\": not found" containerID="8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f" Jan 29 16:15:44.278889 kubelet[2705]: I0129 16:15:44.278834 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f"} err="failed to get container status \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fae7fc51e65cca076505c3d194644aa7021029ee3803f7c241e0ef6b96a327f\": not found" Jan 29 16:15:44.278889 kubelet[2705]: I0129 16:15:44.278854 2705 scope.go:117] "RemoveContainer" containerID="f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051" Jan 29 16:15:44.279053 containerd[1514]: time="2025-01-29T16:15:44.279019179Z" level=error msg="ContainerStatus for \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\": not found" Jan 29 16:15:44.279174 kubelet[2705]: E0129 16:15:44.279149 2705 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\": not found" containerID="f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051" Jan 29 16:15:44.279218 kubelet[2705]: I0129 16:15:44.279174 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051"} err="failed to get container status \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7393ef364b871b16903edc48384f1020d3998db4fd3a7dc5636931ac6056051\": not found" Jan 29 16:15:44.279218 kubelet[2705]: I0129 16:15:44.279193 2705 scope.go:117] "RemoveContainer" containerID="294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c" Jan 29 16:15:44.279468 containerd[1514]: time="2025-01-29T16:15:44.279425220Z" level=error msg="ContainerStatus for \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\": not found" Jan 29 16:15:44.279637 kubelet[2705]: E0129 16:15:44.279564 2705 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\": not found" containerID="294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c" Jan 29 16:15:44.279637 kubelet[2705]: I0129 16:15:44.279590 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c"} err="failed to get container status \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"294c02587deb817f972955f2f7a02ed4f848824325f0ccd1b62cae292f03ff4c\": not found" Jan 29 16:15:44.486254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c9da084958a6d0e8b9f8b27464b56dadb10c40eb39b818ecfe837cb30375da0-rootfs.mount: Deactivated successfully. Jan 29 16:15:44.486421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491-rootfs.mount: Deactivated successfully. Jan 29 16:15:44.486526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74766617345aa9a65249d80dd03d04d4cf03b41cc4546ae6b3a8ea7da4bda491-shm.mount: Deactivated successfully. Jan 29 16:15:44.486626 systemd[1]: var-lib-kubelet-pods-0090dbaa\x2dcb6d\x2d4814\x2d9ae1\x2d912788b4cea2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drgk8k.mount: Deactivated successfully. Jan 29 16:15:44.486715 systemd[1]: var-lib-kubelet-pods-59ea8f2b\x2dab7c\x2d4214\x2da0e3\x2dd0dd6ec85489-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7pjs.mount: Deactivated successfully. Jan 29 16:15:44.486794 systemd[1]: var-lib-kubelet-pods-59ea8f2b\x2dab7c\x2d4214\x2da0e3\x2dd0dd6ec85489-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:15:44.486885 systemd[1]: var-lib-kubelet-pods-59ea8f2b\x2dab7c\x2d4214\x2da0e3\x2dd0dd6ec85489-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:15:44.824344 kubelet[2705]: E0129 16:15:44.824181 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:15:44.826315 kubelet[2705]: I0129 16:15:44.826248 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0090dbaa-cb6d-4814-9ae1-912788b4cea2" path="/var/lib/kubelet/pods/0090dbaa-cb6d-4814-9ae1-912788b4cea2/volumes" Jan 29 16:15:44.826895 kubelet[2705]: I0129 16:15:44.826865 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" path="/var/lib/kubelet/pods/59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489/volumes" Jan 29 16:15:45.417135 sshd[4393]: Connection closed by 10.0.0.1 port 58130 Jan 29 16:15:45.417569 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:45.428798 systemd[1]: sshd@24-10.0.0.32:22-10.0.0.1:58130.service: Deactivated successfully. Jan 29 16:15:45.431328 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:15:45.433305 systemd-logind[1503]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:15:45.440663 systemd[1]: Started sshd@25-10.0.0.32:22-10.0.0.1:58146.service - OpenSSH per-connection server daemon (10.0.0.1:58146). Jan 29 16:15:45.441824 systemd-logind[1503]: Removed session 25. 
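Once every mount under the two dead pods has been deactivated, the kubelet reports cleaning up the orphaned volumes directories under /var/lib/kubelet/pods/<uid>/volumes. A minimal sketch of locating such a directory and checking that nothing is left inside before it could be removed (the root directory and the emptiness test are assumptions):

    from pathlib import Path

    def orphaned_volumes_dir(pod_uid: str, root: str = "/var/lib/kubelet") -> Path:
        return Path(root) / "pods" / pod_uid / "volumes"

    def safe_to_clean(pod_uid: str) -> bool:
        """True if the pod's volumes dir exists and has no leftover entries."""
        d = orphaned_volumes_dir(pod_uid)
        return d.is_dir() and next(d.rglob("*"), None) is None

    print(orphaned_volumes_dir("0090dbaa-cb6d-4814-9ae1-912788b4cea2"))
    # /var/lib/kubelet/pods/0090dbaa-cb6d-4814-9ae1-912788b4cea2/volumes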
Jan 29 16:15:45.479456 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 58146 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:15:45.481169 sshd-session[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:45.486105 systemd-logind[1503]: New session 26 of user core.
Jan 29 16:15:45.499442 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 16:15:45.999417 sshd[4557]: Connection closed by 10.0.0.1 port 58146
Jan 29 16:15:46.002267 sshd-session[4554]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:46.015344 systemd[1]: sshd@25-10.0.0.32:22-10.0.0.1:58146.service: Deactivated successfully.
Jan 29 16:15:46.019504 systemd-logind[1503]: Session 26 logged out. Waiting for processes to exit.
Jan 29 16:15:46.024658 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 16:15:46.025887 kubelet[2705]: I0129 16:15:46.025834 2705 topology_manager.go:215] "Topology Admit Handler" podUID="beba7566-228e-4634-88cd-4a58446a90f9" podNamespace="kube-system" podName="cilium-j6vdh"
Jan 29 16:15:46.026432 kubelet[2705]: E0129 16:15:46.025923 2705 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0090dbaa-cb6d-4814-9ae1-912788b4cea2" containerName="cilium-operator"
Jan 29 16:15:46.026432 kubelet[2705]: E0129 16:15:46.025939 2705 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" containerName="mount-cgroup"
Jan 29 16:15:46.026432 kubelet[2705]: E0129 16:15:46.025953 2705 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" containerName="mount-bpf-fs"
Jan 29 16:15:46.026432 kubelet[2705]: E0129 16:15:46.025962 2705 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" containerName="clean-cilium-state"
Jan 29 16:15:46.026432 kubelet[2705]: E0129 16:15:46.025970 2705 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" containerName="cilium-agent"
Jan 29 16:15:46.026432 kubelet[2705]: E0129 16:15:46.025981 2705 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" containerName="apply-sysctl-overwrites"
Jan 29 16:15:46.026432 kubelet[2705]: I0129 16:15:46.026027 2705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59ea8f2b-ab7c-4214-a0e3-d0dd6ec85489" containerName="cilium-agent"
Jan 29 16:15:46.026432 kubelet[2705]: I0129 16:15:46.026037 2705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0090dbaa-cb6d-4814-9ae1-912788b4cea2" containerName="cilium-operator"
Jan 29 16:15:46.041197 kubelet[2705]: I0129 16:15:46.039740 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beba7566-228e-4634-88cd-4a58446a90f9-cilium-config-path\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041197 kubelet[2705]: I0129 16:15:46.039794 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/beba7566-228e-4634-88cd-4a58446a90f9-cilium-ipsec-secrets\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041197 kubelet[2705]: I0129 16:15:46.039810 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beba7566-228e-4634-88cd-4a58446a90f9-hubble-tls\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041197 kubelet[2705]: I0129 16:15:46.039824 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-cilium-cgroup\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041197 kubelet[2705]: I0129 16:15:46.039839 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-host-proc-sys-kernel\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041197 kubelet[2705]: I0129 16:15:46.039852 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-xtables-lock\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.040595 systemd[1]: Started sshd@26-10.0.0.32:22-10.0.0.1:58158.service - OpenSSH per-connection server daemon (10.0.0.1:58158).
Jan 29 16:15:46.041536 kubelet[2705]: I0129 16:15:46.039865 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpm4t\" (UniqueName: \"kubernetes.io/projected/beba7566-228e-4634-88cd-4a58446a90f9-kube-api-access-gpm4t\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041536 kubelet[2705]: I0129 16:15:46.039879 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-cilium-run\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041536 kubelet[2705]: I0129 16:15:46.039893 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-lib-modules\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041536 kubelet[2705]: I0129 16:15:46.039906 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-bpf-maps\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041536 kubelet[2705]: I0129 16:15:46.039918 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-host-proc-sys-net\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041536 kubelet[2705]: I0129 16:15:46.039933 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-etc-cni-netd\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041670 kubelet[2705]: I0129 16:15:46.039949 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-hostproc\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041670 kubelet[2705]: I0129 16:15:46.039971 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beba7566-228e-4634-88cd-4a58446a90f9-cni-path\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.041670 kubelet[2705]: I0129 16:15:46.039985 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beba7566-228e-4634-88cd-4a58446a90f9-clustermesh-secrets\") pod \"cilium-j6vdh\" (UID: \"beba7566-228e-4634-88cd-4a58446a90f9\") " pod="kube-system/cilium-j6vdh"
Jan 29 16:15:46.042354 systemd-logind[1503]: Removed session 26.
Jan 29 16:15:46.055202 systemd[1]: Created slice kubepods-burstable-podbeba7566_228e_4634_88cd_4a58446a90f9.slice - libcontainer container kubepods-burstable-podbeba7566_228e_4634_88cd_4a58446a90f9.slice.
Jan 29 16:15:46.086874 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 58158 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:15:46.088722 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:46.093552 systemd-logind[1503]: New session 27 of user core.
Jan 29 16:15:46.099422 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 16:15:46.151992 sshd[4571]: Connection closed by 10.0.0.1 port 58158
Jan 29 16:15:46.152585 sshd-session[4568]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:46.158557 systemd[1]: sshd@26-10.0.0.32:22-10.0.0.1:58158.service: Deactivated successfully.
Jan 29 16:15:46.160713 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 16:15:46.175731 systemd-logind[1503]: Session 27 logged out. Waiting for processes to exit.
Jan 29 16:15:46.184710 systemd[1]: Started sshd@27-10.0.0.32:22-10.0.0.1:58170.service - OpenSSH per-connection server daemon (10.0.0.1:58170).
Jan 29 16:15:46.186343 systemd-logind[1503]: Removed session 27.
Jan 29 16:15:46.225468 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 58170 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:15:46.227163 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:46.232080 systemd-logind[1503]: New session 28 of user core.
Jan 29 16:15:46.239539 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 16:15:46.360190 kubelet[2705]: E0129 16:15:46.360032 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:46.361049 containerd[1514]: time="2025-01-29T16:15:46.361001685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6vdh,Uid:beba7566-228e-4634-88cd-4a58446a90f9,Namespace:kube-system,Attempt:0,}"
Jan 29 16:15:46.385940 containerd[1514]: time="2025-01-29T16:15:46.385836180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:15:46.385940 containerd[1514]: time="2025-01-29T16:15:46.385885093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:15:46.385940 containerd[1514]: time="2025-01-29T16:15:46.385898189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:15:46.386832 containerd[1514]: time="2025-01-29T16:15:46.386777490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:15:46.416547 systemd[1]: Started cri-containerd-2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab.scope - libcontainer container 2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab.
Jan 29 16:15:46.439270 containerd[1514]: time="2025-01-29T16:15:46.439215615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6vdh,Uid:beba7566-228e-4634-88cd-4a58446a90f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\""
Jan 29 16:15:46.439975 kubelet[2705]: E0129 16:15:46.439944 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:46.442533 containerd[1514]: time="2025-01-29T16:15:46.442494900Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:15:46.455836 containerd[1514]: time="2025-01-29T16:15:46.455785809Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"40754ec0e06c34394151597b97f156851f2c98553d4f0d7dcde94cc288e1f7e7\""
Jan 29 16:15:46.456259 containerd[1514]: time="2025-01-29T16:15:46.456209805Z" level=info msg="StartContainer for \"40754ec0e06c34394151597b97f156851f2c98553d4f0d7dcde94cc288e1f7e7\""
Jan 29 16:15:46.484469 systemd[1]: Started cri-containerd-40754ec0e06c34394151597b97f156851f2c98553d4f0d7dcde94cc288e1f7e7.scope - libcontainer container 40754ec0e06c34394151597b97f156851f2c98553d4f0d7dcde94cc288e1f7e7.
Jan 29 16:15:46.513542 containerd[1514]: time="2025-01-29T16:15:46.513495145Z" level=info msg="StartContainer for \"40754ec0e06c34394151597b97f156851f2c98553d4f0d7dcde94cc288e1f7e7\" returns successfully"
Jan 29 16:15:46.522995 systemd[1]: cri-containerd-40754ec0e06c34394151597b97f156851f2c98553d4f0d7dcde94cc288e1f7e7.scope: Deactivated successfully.
Jan 29 16:15:46.556945 containerd[1514]: time="2025-01-29T16:15:46.556874418Z" level=info msg="shim disconnected" id=40754ec0e06c34394151597b97f156851f2c98553d4f0d7dcde94cc288e1f7e7 namespace=k8s.io
Jan 29 16:15:46.556945 containerd[1514]: time="2025-01-29T16:15:46.556938369Z" level=warning msg="cleaning up after shim disconnected" id=40754ec0e06c34394151597b97f156851f2c98553d4f0d7dcde94cc288e1f7e7 namespace=k8s.io
Jan 29 16:15:46.556945 containerd[1514]: time="2025-01-29T16:15:46.556949421Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:15:47.244095 kubelet[2705]: E0129 16:15:47.244052 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:47.246245 containerd[1514]: time="2025-01-29T16:15:47.246081658Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:15:47.271738 containerd[1514]: time="2025-01-29T16:15:47.271667997Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036\""
Jan 29 16:15:47.272428 containerd[1514]: time="2025-01-29T16:15:47.272374671Z" level=info msg="StartContainer for \"db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036\""
Jan 29 16:15:47.310586 systemd[1]: Started cri-containerd-db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036.scope - libcontainer container db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036.
Jan 29 16:15:47.340811 containerd[1514]: time="2025-01-29T16:15:47.340754134Z" level=info msg="StartContainer for \"db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036\" returns successfully"
Jan 29 16:15:47.347667 systemd[1]: cri-containerd-db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036.scope: Deactivated successfully.
Jan 29 16:15:47.377411 containerd[1514]: time="2025-01-29T16:15:47.377330964Z" level=info msg="shim disconnected" id=db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036 namespace=k8s.io
Jan 29 16:15:47.377411 containerd[1514]: time="2025-01-29T16:15:47.377401068Z" level=warning msg="cleaning up after shim disconnected" id=db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036 namespace=k8s.io
Jan 29 16:15:47.377411 containerd[1514]: time="2025-01-29T16:15:47.377412930Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:15:48.149656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db26abde44aa3d355298726cb8cdd06dfc7d86e09dc8d6dfeae9e4c04593a036-rootfs.mount: Deactivated successfully.
Jan 29 16:15:48.248056 kubelet[2705]: E0129 16:15:48.248003 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:48.250084 containerd[1514]: time="2025-01-29T16:15:48.250036737Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:15:48.274700 containerd[1514]: time="2025-01-29T16:15:48.274638958Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a\""
Jan 29 16:15:48.275427 containerd[1514]: time="2025-01-29T16:15:48.275368135Z" level=info msg="StartContainer for \"f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a\""
Jan 29 16:15:48.305409 systemd[1]: Started cri-containerd-f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a.scope - libcontainer container f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a.
Jan 29 16:15:48.337665 systemd[1]: cri-containerd-f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a.scope: Deactivated successfully.
Jan 29 16:15:48.338979 containerd[1514]: time="2025-01-29T16:15:48.338943557Z" level=info msg="StartContainer for \"f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a\" returns successfully"
Jan 29 16:15:48.366464 containerd[1514]: time="2025-01-29T16:15:48.366365433Z" level=info msg="shim disconnected" id=f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a namespace=k8s.io
Jan 29 16:15:48.366464 containerd[1514]: time="2025-01-29T16:15:48.366456867Z" level=warning msg="cleaning up after shim disconnected" id=f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a namespace=k8s.io
Jan 29 16:15:48.366464 containerd[1514]: time="2025-01-29T16:15:48.366465764Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:15:48.890091 kubelet[2705]: E0129 16:15:48.890040 2705 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:15:49.148572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6b25056b057d9b1dfb020306ac959d5b477f87feb767d9b36387747562aa87a-rootfs.mount: Deactivated successfully.
Jan 29 16:15:49.251528 kubelet[2705]: E0129 16:15:49.251493 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:49.253106 containerd[1514]: time="2025-01-29T16:15:49.253021402Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:15:49.269619 containerd[1514]: time="2025-01-29T16:15:49.269571134Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968\""
Jan 29 16:15:49.270345 containerd[1514]: time="2025-01-29T16:15:49.270310020Z" level=info msg="StartContainer for \"97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968\""
Jan 29 16:15:49.307521 systemd[1]: Started cri-containerd-97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968.scope - libcontainer container 97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968.
Jan 29 16:15:49.338315 systemd[1]: cri-containerd-97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968.scope: Deactivated successfully.
Jan 29 16:15:49.370578 containerd[1514]: time="2025-01-29T16:15:49.370517333Z" level=info msg="StartContainer for \"97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968\" returns successfully"
Jan 29 16:15:49.396778 containerd[1514]: time="2025-01-29T16:15:49.396701137Z" level=info msg="shim disconnected" id=97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968 namespace=k8s.io
Jan 29 16:15:49.396778 containerd[1514]: time="2025-01-29T16:15:49.396759358Z" level=warning msg="cleaning up after shim disconnected" id=97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968 namespace=k8s.io
Jan 29 16:15:49.396778 containerd[1514]: time="2025-01-29T16:15:49.396767704Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:15:50.148733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97b95af24cffe1427e48d2e8f92a156c32fb5c5421e852eaeed94a96bdc24968-rootfs.mount: Deactivated successfully.
Jan 29 16:15:50.260169 kubelet[2705]: E0129 16:15:50.260104 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:50.262885 containerd[1514]: time="2025-01-29T16:15:50.262831582Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:15:50.281981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239911278.mount: Deactivated successfully.
Jan 29 16:15:50.284133 containerd[1514]: time="2025-01-29T16:15:50.284084294Z" level=info msg="CreateContainer within sandbox \"2db6ac08f307666e91dd3aa602b93100c929fb18cb396e8b8389b221751737ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d21f62cd39d6eb0097f77196c99988f6961c49e415d565327b92479236f94f8\""
Jan 29 16:15:50.284644 containerd[1514]: time="2025-01-29T16:15:50.284619693Z" level=info msg="StartContainer for \"6d21f62cd39d6eb0097f77196c99988f6961c49e415d565327b92479236f94f8\""
Jan 29 16:15:50.322572 systemd[1]: Started cri-containerd-6d21f62cd39d6eb0097f77196c99988f6961c49e415d565327b92479236f94f8.scope - libcontainer container 6d21f62cd39d6eb0097f77196c99988f6961c49e415d565327b92479236f94f8.
Jan 29 16:15:50.361028 containerd[1514]: time="2025-01-29T16:15:50.360865351Z" level=info msg="StartContainer for \"6d21f62cd39d6eb0097f77196c99988f6961c49e415d565327b92479236f94f8\" returns successfully"
Jan 29 16:15:50.822306 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 16:15:50.893468 kubelet[2705]: I0129 16:15:50.893409 2705 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:15:50Z","lastTransitionTime":"2025-01-29T16:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 16:15:51.265007 kubelet[2705]: E0129 16:15:51.264970 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:51.284747 kubelet[2705]: I0129 16:15:51.284661 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j6vdh" podStartSLOduration=6.284641193 podStartE2EDuration="6.284641193s" podCreationTimestamp="2025-01-29 16:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:15:51.284190655 +0000 UTC m=+92.546689900" watchObservedRunningTime="2025-01-29 16:15:51.284641193 +0000 UTC m=+92.547140438"
Jan 29 16:15:52.361737 kubelet[2705]: E0129 16:15:52.361673 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:52.823640 kubelet[2705]: E0129 16:15:52.823295 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:53.824420 kubelet[2705]: E0129 16:15:53.824358 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:54.034778 systemd-networkd[1441]: lxc_health: Link UP
Jan 29 16:15:54.037557 systemd-networkd[1441]: lxc_health: Gained carrier
Jan 29 16:15:54.363078 kubelet[2705]: E0129 16:15:54.362372 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:54.735919 kubelet[2705]: E0129 16:15:54.735857 2705 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43474->127.0.0.1:42823: write tcp 127.0.0.1:43474->127.0.0.1:42823: write: broken pipe
Jan 29 16:15:55.276251 kubelet[2705]: E0129 16:15:55.276200 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:55.561554 systemd-networkd[1441]: lxc_health: Gained IPv6LL
Jan 29 16:15:56.277518 kubelet[2705]: E0129 16:15:56.277480 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:15:56.793405 systemd[1]: run-containerd-runc-k8s.io-6d21f62cd39d6eb0097f77196c99988f6961c49e415d565327b92479236f94f8-runc.wR9CMR.mount: Deactivated successfully.
Jan 29 16:16:01.050514 sshd[4585]: Connection closed by 10.0.0.1 port 58170
Jan 29 16:16:01.051046 sshd-session[4581]: pam_unix(sshd:session): session closed for user core
Jan 29 16:16:01.056939 systemd[1]: sshd@27-10.0.0.32:22-10.0.0.1:58170.service: Deactivated successfully.
Jan 29 16:16:01.060258 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 16:16:01.061212 systemd-logind[1503]: Session 28 logged out. Waiting for processes to exit.
Jan 29 16:16:01.062340 systemd-logind[1503]: Removed session 28.