Jan 29 11:47:43.926210 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 11:47:43.926231 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:47:43.926244 kernel: BIOS-provided physical RAM map:
Jan 29 11:47:43.926251 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:47:43.926258 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:47:43.926265 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:47:43.926272 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 11:47:43.926278 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 11:47:43.926284 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:47:43.926293 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 11:47:43.926299 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:47:43.926305 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:47:43.926311 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:47:43.926317 kernel: NX (Execute Disable) protection: active
Jan 29 11:47:43.926325 kernel: APIC: Static calls initialized
Jan 29 11:47:43.926334 kernel: SMBIOS 2.8 present.
Jan 29 11:47:43.926341 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 11:47:43.926347 kernel: Hypervisor detected: KVM
Jan 29 11:47:43.926354 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:47:43.926360 kernel: kvm-clock: using sched offset of 2629517055 cycles
Jan 29 11:47:43.926367 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:47:43.926374 kernel: tsc: Detected 2794.750 MHz processor
Jan 29 11:47:43.926381 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:47:43.926389 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:47:43.926395 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 11:47:43.926404 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:47:43.926411 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:47:43.926418 kernel: Using GB pages for direct mapping
Jan 29 11:47:43.926425 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:47:43.926432 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 11:47:43.926439 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:43.926445 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:43.926452 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:43.926461 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 11:47:43.926468 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:43.926475 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:43.926482 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:43.926488 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:43.926495 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 11:47:43.926502 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 11:47:43.926513 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 11:47:43.926522 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 11:47:43.926529 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 11:47:43.926536 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 11:47:43.926543 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 11:47:43.926639 kernel: No NUMA configuration found
Jan 29 11:47:43.926647 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 11:47:43.926654 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 11:47:43.926664 kernel: Zone ranges:
Jan 29 11:47:43.926672 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:47:43.926679 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 11:47:43.926686 kernel: Normal empty
Jan 29 11:47:43.926694 kernel: Movable zone start for each node
Jan 29 11:47:43.926701 kernel: Early memory node ranges
Jan 29 11:47:43.926708 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:47:43.926715 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 11:47:43.926722 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 11:47:43.926732 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:47:43.926739 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:47:43.926746 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 11:47:43.926753 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:47:43.926760 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:47:43.926767 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:47:43.926774 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:47:43.926782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:47:43.926789 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:47:43.926798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:47:43.926805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:47:43.926812 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:47:43.926819 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:47:43.926826 kernel: TSC deadline timer available
Jan 29 11:47:43.926833 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:47:43.926840 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:47:43.926847 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:47:43.926854 kernel: kvm-guest: setup PV sched yield
Jan 29 11:47:43.926862 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 11:47:43.926871 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:47:43.926878 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:47:43.926886 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:47:43.926893 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:47:43.926900 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:47:43.926907 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:47:43.926914 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:47:43.926921 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:47:43.926936 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:47:43.926955 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:47:43.926964 kernel: random: crng init done
Jan 29 11:47:43.926973 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:47:43.926993 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:47:43.927002 kernel: Fallback order for Node 0: 0
Jan 29 11:47:43.927012 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 11:47:43.927019 kernel: Policy zone: DMA32
Jan 29 11:47:43.927026 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:47:43.927037 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 29 11:47:43.927044 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:47:43.927051 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 11:47:43.927058 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:47:43.927065 kernel: Dynamic Preempt: voluntary
Jan 29 11:47:43.927072 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:47:43.927084 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:47:43.927092 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:47:43.927099 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:47:43.927108 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:47:43.927115 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:47:43.927123 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:47:43.927130 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:47:43.927137 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:47:43.927144 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:47:43.927151 kernel: Console: colour VGA+ 80x25
Jan 29 11:47:43.927158 kernel: printk: console [ttyS0] enabled
Jan 29 11:47:43.927165 kernel: ACPI: Core revision 20230628
Jan 29 11:47:43.927175 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:47:43.927182 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:47:43.927189 kernel: x2apic enabled
Jan 29 11:47:43.927196 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:47:43.927203 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:47:43.927210 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:47:43.927217 kernel: kvm-guest: setup PV IPIs
Jan 29 11:47:43.927233 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:47:43.927241 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:47:43.927248 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 29 11:47:43.927256 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:47:43.927263 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:47:43.927273 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:47:43.927280 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:47:43.927287 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:47:43.927295 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:47:43.927302 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:47:43.927314 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:47:43.927321 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:47:43.927329 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:47:43.927336 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:47:43.927344 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:47:43.927352 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:47:43.927360 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:47:43.927367 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:47:43.927377 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:47:43.927384 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:47:43.927392 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:47:43.927399 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:47:43.927407 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:47:43.927414 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:47:43.927421 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:47:43.927429 kernel: landlock: Up and running.
Jan 29 11:47:43.927436 kernel: SELinux: Initializing.
Jan 29 11:47:43.927446 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:47:43.927453 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:47:43.927461 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:47:43.927468 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:47:43.927476 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:47:43.927483 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:47:43.927491 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:47:43.927498 kernel: ... version: 0
Jan 29 11:47:43.927508 kernel: ... bit width: 48
Jan 29 11:47:43.927515 kernel: ... generic registers: 6
Jan 29 11:47:43.927522 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:47:43.927530 kernel: ... max period: 00007fffffffffff
Jan 29 11:47:43.927537 kernel: ... fixed-purpose events: 0
Jan 29 11:47:43.927545 kernel: ... event mask: 000000000000003f
Jan 29 11:47:43.927564 kernel: signal: max sigframe size: 1776
Jan 29 11:47:43.927572 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:47:43.927579 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:47:43.927587 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:47:43.927597 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:47:43.927604 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:47:43.927611 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:47:43.927619 kernel: smpboot: Max logical packages: 1
Jan 29 11:47:43.927626 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 29 11:47:43.927634 kernel: devtmpfs: initialized
Jan 29 11:47:43.927641 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:47:43.927648 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:47:43.927656 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:47:43.927665 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:47:43.927673 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:47:43.927680 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:47:43.927688 kernel: audit: type=2000 audit(1738151263.331:1): state=initialized audit_enabled=0 res=1
Jan 29 11:47:43.927695 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:47:43.927702 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:47:43.927710 kernel: cpuidle: using governor menu
Jan 29 11:47:43.927717 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:47:43.927725 kernel: dca service started, version 1.12.1
Jan 29 11:47:43.927734 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:47:43.927750 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:47:43.927759 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:47:43.927766 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:47:43.927774 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:47:43.927781 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:47:43.927789 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:47:43.927796 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:47:43.927804 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:47:43.927814 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:47:43.927821 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:47:43.927828 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:47:43.927836 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:47:43.927843 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:47:43.927850 kernel: ACPI: Interpreter enabled
Jan 29 11:47:43.927858 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:47:43.927865 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:47:43.927873 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:47:43.927882 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:47:43.927890 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:47:43.927897 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:47:43.928087 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:47:43.928215 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:47:43.928374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:47:43.928386 kernel: PCI host bridge to bus 0000:00
Jan 29 11:47:43.928518 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:47:43.928643 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:47:43.928754 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:47:43.928862 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 11:47:43.928971 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:47:43.929090 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 11:47:43.929201 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:47:43.929341 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:47:43.929471 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:47:43.929692 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 11:47:43.929829 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 11:47:43.929949 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 11:47:43.930078 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:47:43.930224 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:47:43.930350 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 11:47:43.930470 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 11:47:43.930606 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 11:47:43.930785 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:47:43.930936 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:47:43.931103 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 11:47:43.931249 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 11:47:43.931384 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:47:43.931506 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 11:47:43.931643 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 11:47:43.931764 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 11:47:43.931884 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 11:47:43.932023 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:47:43.932150 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:47:43.932281 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:47:43.932491 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 11:47:43.932670 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 11:47:43.932812 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:47:43.932933 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 11:47:43.932943 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:47:43.932956 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:47:43.932963 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:47:43.932971 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:47:43.932987 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:47:43.932998 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:47:43.933007 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:47:43.933017 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:47:43.933027 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:47:43.933035 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:47:43.933045 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:47:43.933053 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:47:43.933061 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:47:43.933068 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:47:43.933076 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:47:43.933083 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:47:43.933091 kernel: iommu: Default domain type: Translated
Jan 29 11:47:43.933099 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:47:43.933107 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:47:43.933116 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:47:43.933124 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:47:43.933131 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 11:47:43.933267 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:47:43.933427 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:47:43.933609 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:47:43.933626 kernel: vgaarb: loaded
Jan 29 11:47:43.933637 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:47:43.933652 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:47:43.933662 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:47:43.933670 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:47:43.933678 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:47:43.933685 kernel: pnp: PnP ACPI init
Jan 29 11:47:43.933820 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:47:43.933832 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:47:43.933839 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:47:43.933850 kernel: NET: Registered PF_INET protocol family
Jan 29 11:47:43.933858 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:47:43.933866 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:47:43.933874 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:47:43.933881 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:47:43.933889 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:47:43.933896 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:47:43.933904 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:47:43.933912 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:47:43.933921 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:47:43.933929 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:47:43.934050 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:47:43.934159 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:47:43.934267 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:47:43.934388 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 11:47:43.934524 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:47:43.934679 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 11:47:43.934699 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:47:43.934710 kernel: Initialise system trusted keyrings
Jan 29 11:47:43.934720 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:47:43.934730 kernel: Key type asymmetric registered
Jan 29 11:47:43.934741 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:47:43.934751 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:47:43.934761 kernel: io scheduler mq-deadline registered
Jan 29 11:47:43.934771 kernel: io scheduler kyber registered
Jan 29 11:47:43.934782 kernel: io scheduler bfq registered
Jan 29 11:47:43.934795 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:47:43.934807 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:47:43.934817 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:47:43.934828 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:47:43.934838 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:47:43.934848 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:47:43.934859 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:47:43.934869 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:47:43.934879 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:47:43.934892 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:47:43.935076 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:47:43.935220 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:47:43.935369 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:47:43 UTC (1738151263)
Jan 29 11:47:43.935508 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 11:47:43.935522 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:47:43.935532 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:47:43.935542 kernel: Segment Routing with IPv6
Jan 29 11:47:43.935631 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:47:43.935642 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:47:43.935653 kernel: Key type dns_resolver registered
Jan 29 11:47:43.935663 kernel: IPI shorthand broadcast: enabled
Jan 29 11:47:43.935673 kernel: sched_clock: Marking stable (798002890, 109930169)->(965977998, -58044939)
Jan 29 11:47:43.935684 kernel: registered taskstats version 1
Jan 29 11:47:43.935694 kernel: Loading compiled-in X.509 certificates
Jan 29 11:47:43.935704 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 11:47:43.935714 kernel: Key type .fscrypt registered
Jan 29 11:47:43.935728 kernel: Key type fscrypt-provisioning registered
Jan 29 11:47:43.935738 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:47:43.935748 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:47:43.935757 kernel: ima: No architecture policies found
Jan 29 11:47:43.935764 kernel: clk: Disabling unused clocks
Jan 29 11:47:43.935772 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 11:47:43.935779 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:47:43.935787 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 11:47:43.935794 kernel: Run /init as init process
Jan 29 11:47:43.935804 kernel: with arguments:
Jan 29 11:47:43.935812 kernel: /init
Jan 29 11:47:43.935819 kernel: with environment:
Jan 29 11:47:43.935826 kernel: HOME=/
Jan 29 11:47:43.935834 kernel: TERM=linux
Jan 29 11:47:43.935841 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:47:43.935851 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:47:43.935860 systemd[1]: Detected virtualization kvm.
Jan 29 11:47:43.935871 systemd[1]: Detected architecture x86-64.
Jan 29 11:47:43.935878 systemd[1]: Running in initrd.
Jan 29 11:47:43.935886 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:47:43.935894 systemd[1]: Hostname set to .
Jan 29 11:47:43.935902 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:47:43.935910 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:47:43.935918 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:47:43.935926 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:47:43.935937 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:47:43.935956 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:47:43.935967 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:47:43.935976 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:47:43.935994 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:47:43.936005 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:47:43.936013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:47:43.936022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:47:43.936030 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:47:43.936038 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:47:43.936046 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:47:43.936054 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:47:43.936062 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:47:43.936073 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:47:43.936081 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:47:43.936089 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:47:43.936097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:47:43.936106 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:47:43.936114 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:47:43.936122 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:47:43.936130 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:47:43.936138 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:47:43.936149 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:47:43.936157 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:47:43.936165 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:47:43.936173 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:47:43.936182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:47:43.936192 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:47:43.936200 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:47:43.936209 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:47:43.936239 systemd-journald[193]: Collecting audit messages is disabled.
Jan 29 11:47:43.936260 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:47:43.936271 systemd-journald[193]: Journal started
Jan 29 11:47:43.936292 systemd-journald[193]: Runtime Journal (/run/log/journal/c558c25312214f25b8fa27d25724dd39) is 6.0M, max 48.4M, 42.3M free.
Jan 29 11:47:43.931219 systemd-modules-load[194]: Inserted module 'overlay'
Jan 29 11:47:43.965736 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:47:43.965767 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:47:43.965784 kernel: Bridge firewalling registered
Jan 29 11:47:43.958430 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 29 11:47:43.966044 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:47:43.967088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:47:43.979754 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:47:43.980783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:47:43.982716 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:47:43.992252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:47:43.993401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:47:43.997235 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:47:44.008736 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:47:44.011457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:47:44.011884 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:47:44.015906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:47:44.022930 dracut-cmdline[222]: dracut-dracut-053
Jan 29 11:47:44.024317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:47:44.026865 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:47:44.067086 systemd-resolved[229]: Positive Trust Anchors:
Jan 29 11:47:44.067105 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:47:44.067143 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:47:44.070922 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jan 29 11:47:44.072388 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:47:44.075208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:47:44.117592 kernel: SCSI subsystem initialized
Jan 29 11:47:44.129578 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:47:44.142577 kernel: iscsi: registered transport (tcp)
Jan 29 11:47:44.168586 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:47:44.168635 kernel: QLogic iSCSI HBA Driver
Jan 29 11:47:44.215882 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:47:44.226686 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:47:44.252582 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:47:44.252618 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:47:44.253574 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:47:44.295588 kernel: raid6: avx2x4 gen() 30290 MB/s
Jan 29 11:47:44.312582 kernel: raid6: avx2x2 gen() 30812 MB/s
Jan 29 11:47:44.329668 kernel: raid6: avx2x1 gen() 25807 MB/s
Jan 29 11:47:44.329695 kernel: raid6: using algorithm avx2x2 gen() 30812 MB/s
Jan 29 11:47:44.347683 kernel: raid6: .... xor() 19745 MB/s, rmw enabled
Jan 29 11:47:44.347716 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:47:44.372582 kernel: xor: automatically using best checksumming function avx
Jan 29 11:47:44.520586 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:47:44.532786 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:47:44.545713 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:47:44.556979 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 29 11:47:44.561478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:47:44.588718 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:47:44.600412 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Jan 29 11:47:44.631253 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:47:44.643681 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:47:44.704467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:47:44.710748 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:47:44.728746 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:47:44.741909 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:47:44.745452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:47:44.748468 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:47:44.757947 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 11:47:44.788035 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:47:44.788057 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:47:44.788215 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:47:44.788229 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:47:44.788247 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:47:44.788260 kernel: GPT:9289727 != 19775487
Jan 29 11:47:44.788272 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:47:44.788283 kernel: GPT:9289727 != 19775487
Jan 29 11:47:44.788295 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:47:44.788307 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:47:44.757762 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:47:44.768891 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:47:44.780804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:47:44.780923 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:47:44.782548 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:47:44.787339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:47:44.787659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:47:44.805390 kernel: libata version 3.00 loaded.
Jan 29 11:47:44.789134 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:47:44.808129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:47:44.822406 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 11:47:44.885329 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 11:47:44.885351 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 11:47:44.885500 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 11:47:44.885653 kernel: scsi host0: ahci
Jan 29 11:47:44.885799 kernel: scsi host1: ahci
Jan 29 11:47:44.885946 kernel: scsi host2: ahci
Jan 29 11:47:44.886097 kernel: scsi host3: ahci
Jan 29 11:47:44.886240 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (471)
Jan 29 11:47:44.886255 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473)
Jan 29 11:47:44.886265 kernel: scsi host4: ahci
Jan 29 11:47:44.886405 kernel: scsi host5: ahci
Jan 29 11:47:44.886546 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 29 11:47:44.886616 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 29 11:47:44.886627 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 29 11:47:44.886636 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 29 11:47:44.886650 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 29 11:47:44.886660 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 29 11:47:44.884216 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:47:44.917712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:47:44.936834 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:47:44.944051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:47:44.950312 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:47:44.953944 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:47:44.968684 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:47:44.987234 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:47:45.006875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:47:45.215156 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 11:47:45.215226 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 11:47:45.215239 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 11:47:45.215252 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 11:47:45.216578 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 11:47:45.217579 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 11:47:45.217597 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 11:47:45.218576 kernel: ata3.00: applying bridge limits
Jan 29 11:47:45.219580 kernel: ata3.00: configured for UDMA/100
Jan 29 11:47:45.219606 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:47:45.268582 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 11:47:45.282288 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:47:45.282307 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:47:45.286458 disk-uuid[567]: Primary Header is updated.
Jan 29 11:47:45.286458 disk-uuid[567]: Secondary Entries is updated.
Jan 29 11:47:45.286458 disk-uuid[567]: Secondary Header is updated.
Jan 29 11:47:45.290581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:47:45.295578 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:47:46.390584 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:47:46.390784 disk-uuid[579]: The operation has completed successfully.
Jan 29 11:47:46.420499 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:47:46.420660 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:47:46.490776 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:47:46.494650 sh[594]: Success
Jan 29 11:47:46.526595 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 11:47:46.560198 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:47:46.574500 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:47:46.576540 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:47:46.598813 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 11:47:46.598873 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:47:46.598887 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:47:46.599889 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:47:46.601570 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:47:46.606539 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:47:46.608319 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:47:46.617759 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:47:46.639799 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:47:46.658686 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:47:46.658712 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:47:46.658723 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:47:46.662617 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:47:46.672949 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:47:46.675011 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:47:46.765923 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:47:46.800744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:47:46.822364 systemd-networkd[772]: lo: Link UP
Jan 29 11:47:46.822375 systemd-networkd[772]: lo: Gained carrier
Jan 29 11:47:46.824007 systemd-networkd[772]: Enumeration completed
Jan 29 11:47:46.824112 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:47:46.824394 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:47:46.824398 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:47:46.843340 systemd-networkd[772]: eth0: Link UP
Jan 29 11:47:46.843344 systemd-networkd[772]: eth0: Gained carrier
Jan 29 11:47:46.843351 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:47:46.845375 systemd[1]: Reached target network.target - Network.
Jan 29 11:47:46.866621 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:47:46.911413 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:47:46.925951 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:47:47.111654 ignition[777]: Ignition 2.19.0
Jan 29 11:47:47.111667 ignition[777]: Stage: fetch-offline
Jan 29 11:47:47.111711 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:47.111723 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:47.111841 ignition[777]: parsed url from cmdline: ""
Jan 29 11:47:47.111845 ignition[777]: no config URL provided
Jan 29 11:47:47.111850 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:47:47.111859 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:47:47.111890 ignition[777]: op(1): [started] loading QEMU firmware config module
Jan 29 11:47:47.111895 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:47:47.122893 ignition[777]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:47:47.164138 ignition[777]: parsing config with SHA512: 308e3d754ad3754be2893d5d1a671e46be08e527577c57585cbcdc069506457aa40a5a9315affc903910b046fdc33255d6b732adc6aec463f7ffdcbe93423d7b
Jan 29 11:47:47.175377 unknown[777]: fetched base config from "system"
Jan 29 11:47:47.175395 unknown[777]: fetched user config from "qemu"
Jan 29 11:47:47.179811 ignition[777]: fetch-offline: fetch-offline passed
Jan 29 11:47:47.179975 ignition[777]: Ignition finished successfully
Jan 29 11:47:47.183498 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:47:47.183890 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:47:47.196534 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:47:47.216590 ignition[786]: Ignition 2.19.0
Jan 29 11:47:47.216603 ignition[786]: Stage: kargs
Jan 29 11:47:47.216765 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:47.216776 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:47.217673 ignition[786]: kargs: kargs passed
Jan 29 11:47:47.217719 ignition[786]: Ignition finished successfully
Jan 29 11:47:47.227234 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:47:47.240895 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:47:47.255179 ignition[794]: Ignition 2.19.0
Jan 29 11:47:47.255193 ignition[794]: Stage: disks
Jan 29 11:47:47.255422 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:47.255437 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:47.256492 ignition[794]: disks: disks passed
Jan 29 11:47:47.259119 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:47:47.256590 ignition[794]: Ignition finished successfully
Jan 29 11:47:47.260520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:47:47.262290 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:47:47.264821 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:47:47.266019 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:47:47.267812 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:47:47.280007 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:47:47.319072 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:47:48.070546 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:47:48.076666 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:47:48.187604 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 11:47:48.188514 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:47:48.189263 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:47:48.223749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:47:48.230083 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:47:48.230478 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:47:48.230529 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:47:48.230575 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:47:48.243584 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Jan 29 11:47:48.243628 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:47:48.246145 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:47:48.246174 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:47:48.248433 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:47:48.250272 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:47:48.250232 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:47:48.282687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:47:48.294482 systemd-networkd[772]: eth0: Gained IPv6LL
Jan 29 11:47:48.320565 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:47:48.336336 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:47:48.341119 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:47:48.345176 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:47:48.470118 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:47:48.481652 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:47:48.483457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:47:48.489316 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:47:48.490635 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:47:48.520144 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:47:48.531128 ignition[926]: INFO : Ignition 2.19.0
Jan 29 11:47:48.531128 ignition[926]: INFO : Stage: mount
Jan 29 11:47:48.542339 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:48.542339 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:48.542339 ignition[926]: INFO : mount: mount passed
Jan 29 11:47:48.542339 ignition[926]: INFO : Ignition finished successfully
Jan 29 11:47:48.548030 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:47:48.558648 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:47:49.198919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:47:49.209589 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
Jan 29 11:47:49.209647 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:47:49.212102 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:47:49.212129 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:47:49.215586 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:47:49.217397 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:47:49.247169 ignition[956]: INFO : Ignition 2.19.0 Jan 29 11:47:49.247169 ignition[956]: INFO : Stage: files Jan 29 11:47:49.249321 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:47:49.249321 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:47:49.249321 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:47:49.253321 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:47:49.253321 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:47:49.253321 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:47:49.257933 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:47:49.259581 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:47:49.259581 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:47:49.258207 unknown[956]: wrote ssh authorized keys file for user: core Jan 29 11:47:49.265139 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:47:49.365413 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:47:49.488268 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:47:49.488268 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:47:49.493355 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:47:49.828491 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:47:50.028768 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:47:50.152884 ignition[956]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:47:50.152884 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:47:50.582021 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:47:51.344509 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:47:51.344509 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:47:51.349288 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:47:51.351437 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:47:51.351437 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:47:51.351437 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:47:51.351437 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:47:51.351437 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:47:51.351437 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:47:51.351437 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:47:51.395372 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:47:51.401711 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:47:51.403785 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:47:51.403785 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:47:51.403785 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:47:51.403785 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:47:51.403785 ignition[956]: INFO : files: createResultFile: createFiles: op(13): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:47:51.403785 ignition[956]: INFO : files: files passed Jan 29 11:47:51.403785 ignition[956]: INFO : Ignition finished successfully Jan 29 11:47:51.405621 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:47:51.419974 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:47:51.425103 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:47:51.428830 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:47:51.429034 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:47:51.443922 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:47:51.449135 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:47:51.449135 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:47:51.453309 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:47:51.456649 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:47:51.458879 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:47:51.477938 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:47:51.508950 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:47:51.509112 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:47:51.510946 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:47:51.512495 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:47:51.514490 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:47:51.515680 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:47:51.535987 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:47:51.548854 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:47:51.558237 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:47:51.560657 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:47:51.563075 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:47:51.564964 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:47:51.565996 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:47:51.568845 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:47:51.571251 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:47:51.573282 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:47:51.575704 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:47:51.578099 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:47:51.580454 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:47:51.582692 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
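Everything in the files stage above — the two GETs, the plain file writes, the kubernetes.raw link, the unit writes, and the preset flips — is driven by one user-supplied Ignition config. An abridged, hypothetical spec-3.x config that would produce roughly these operations (unit bodies and the ssh key are elided with "...", and the real config may differ):

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": [ "..." ] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } },
          { "path": "/opt/bin/cilium.tar.gz",
            "contents": { "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "..." },
          { "name": "coreos-metadata.service", "enabled": false, "contents": "..." }
        ]
      }
    }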
Jan 29 11:47:51.585365 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:47:51.587616 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:47:51.590225 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:47:51.591991 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:47:51.593094 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:47:51.595485 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:47:51.598338 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:47:51.600834 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:47:51.601828 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:47:51.604439 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:47:51.605515 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:47:51.607961 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:47:51.609051 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:47:51.611462 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:47:51.613263 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:47:51.616607 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:47:51.619415 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:47:51.621302 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:47:51.623277 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:47:51.624197 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:47:51.626246 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:47:51.627183 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:47:51.629397 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:47:51.630674 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:47:51.633243 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:47:51.634274 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:47:51.647874 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:47:51.656137 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:47:51.658217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:47:51.659617 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:47:51.662546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:47:51.663916 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:47:51.664270 ignition[1010]: INFO : Ignition 2.19.0 Jan 29 11:47:51.664270 ignition[1010]: INFO : Stage: umount Jan 29 11:47:51.676384 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:47:51.676384 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:47:51.676384 ignition[1010]: INFO : umount: umount passed Jan 29 11:47:51.676384 ignition[1010]: INFO : Ignition finished successfully Jan 29 11:47:51.679092 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 29 11:47:51.681286 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:47:51.687256 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:47:51.693868 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:47:51.698216 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:47:51.700531 systemd[1]: Stopped target network.target - Network. Jan 29 11:47:51.702685 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:47:51.702775 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:47:51.706214 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:47:51.706284 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:47:51.709621 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:47:51.710834 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:47:51.713211 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:47:51.714385 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:47:51.717076 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:47:51.719842 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:47:51.726326 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:47:51.727463 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:47:51.728233 systemd-networkd[772]: eth0: DHCPv6 lease lost Jan 29 11:47:51.732115 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:47:51.733457 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:47:51.737693 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:47:51.739010 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:47:51.759028 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:47:51.759111 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:47:51.761818 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:47:51.761891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:47:51.783119 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:47:51.783263 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:47:51.783355 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:47:51.801271 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:47:51.801378 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:47:51.803665 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:47:51.803731 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:47:51.806308 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:47:51.806370 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:47:51.808905 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:47:51.821781 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:47:51.822922 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:47:51.831638 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 29 11:47:51.832853 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:47:51.836080 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:47:51.836141 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:47:51.839444 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:47:51.839499 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:47:51.842573 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:47:51.842632 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:47:51.844782 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:47:51.845816 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:47:51.848849 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:47:51.848903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:47:51.863986 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:47:51.871246 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:47:51.871386 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:47:51.874901 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:47:51.874992 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:47:51.878911 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:47:51.878993 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:47:51.881152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:47:51.881217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:47:51.883067 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:47:51.883211 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:47:51.886927 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:47:51.895800 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:47:51.919530 systemd[1]: Switching root. Jan 29 11:47:51.956046 systemd-journald[193]: Journal stopped Jan 29 11:47:54.372840 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 29 11:47:54.372914 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:47:54.372933 kernel: SELinux: policy capability open_perms=1 Jan 29 11:47:54.372948 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:47:54.372968 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:47:54.372989 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:47:54.373005 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:47:54.373019 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:47:54.373038 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:47:54.373053 kernel: audit: type=1403 audit(1738151273.465:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:47:54.373079 systemd[1]: Successfully loaded SELinux policy in 50.631ms. Jan 29 11:47:54.373109 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.215ms. 
Jan 29 11:47:54.373128 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:47:54.373145 systemd[1]: Detected virtualization kvm. Jan 29 11:47:54.373161 systemd[1]: Detected architecture x86-64. Jan 29 11:47:54.373176 systemd[1]: Detected first boot. Jan 29 11:47:54.373192 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:47:54.373211 zram_generator::config[1053]: No configuration found. Jan 29 11:47:54.373232 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:47:54.373247 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:47:54.373263 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:47:54.373279 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:47:54.373295 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:47:54.373311 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:47:54.373326 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:47:54.373345 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:47:54.373360 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:47:54.373376 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:47:54.373392 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:47:54.373407 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:47:54.373423 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:47:54.373439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:47:54.373455 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:47:54.373472 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:47:54.373491 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:47:54.373506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:47:54.373522 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:47:54.373538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:47:54.373571 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:47:54.373590 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:47:54.373606 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:47:54.373625 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:47:54.373642 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:47:54.373662 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:47:54.373679 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:47:54.373694 systemd[1]: Reached target swap.target - Swaps. 
Jan 29 11:47:54.373710 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:47:54.373725 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:47:54.373751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:47:54.373767 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:47:54.373782 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:47:54.373801 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:47:54.373817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:47:54.373833 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:47:54.373849 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:47:54.373871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:47:54.373886 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:47:54.373903 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:47:54.373919 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:47:54.373938 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:47:54.373956 systemd[1]: Reached target machines.target - Containers. Jan 29 11:47:54.373972 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:47:54.373988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:47:54.374004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:47:54.374020 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:47:54.374036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:47:54.374052 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:47:54.374071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:47:54.374087 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:47:54.374103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:47:54.374120 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:47:54.374137 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:47:54.374153 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:47:54.374169 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:47:54.374185 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:47:54.374201 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:47:54.374220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:47:54.374237 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:47:54.374253 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 29 11:47:54.374268 kernel: loop: module loaded Jan 29 11:47:54.374305 systemd-journald[1116]: Collecting audit messages is disabled. Jan 29 11:47:54.374336 kernel: fuse: init (API version 7.39) Jan 29 11:47:54.374352 systemd-journald[1116]: Journal started Jan 29 11:47:54.374385 systemd-journald[1116]: Runtime Journal (/run/log/journal/c558c25312214f25b8fa27d25724dd39) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:47:54.091233 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:47:54.107856 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:47:54.109014 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:47:54.379719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:47:54.382066 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:47:54.382088 systemd[1]: Stopped verity-setup.service. Jan 29 11:47:54.385728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:47:54.389812 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:47:54.390859 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:47:54.392498 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:47:54.393753 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:47:54.395076 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:47:54.396339 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:47:54.397743 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:47:54.422354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:47:54.424013 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:47:54.424195 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:47:54.425789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:47:54.425965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:47:54.427434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:47:54.427626 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:47:54.429190 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:47:54.429362 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:47:54.430798 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:47:54.430967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:47:54.432398 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:47:54.433849 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:47:54.435420 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:47:54.440581 kernel: ACPI: bus type drm_connector registered Jan 29 11:47:54.451852 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:47:54.452062 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:47:54.487722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:47:54.490990 systemd[1]: Reached target network-pre.target - Preparation for Network. 
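The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop pairs above are all instances of one templated unit, modprobe@.service, which systemd instantiates once per module name. Abridged from the upstream unit (details vary by systemd version):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    # The "-" prefix makes a missing module non-fatal to the unit.
    ExecStart=-/sbin/modprobe -abq %I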
Jan 29 11:47:54.499657 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:47:54.519897 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:47:54.521048 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:47:54.521080 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:47:54.523077 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:47:54.525373 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:47:54.527575 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:47:54.528724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:47:54.530140 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:47:54.533713 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:47:54.535064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:47:54.537710 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:47:54.539735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:47:54.543958 systemd-journald[1116]: Time spent on flushing to /var/log/journal/c558c25312214f25b8fa27d25724dd39 is 13.041ms for 953 entries. Jan 29 11:47:54.543958 systemd-journald[1116]: System Journal (/var/log/journal/c558c25312214f25b8fa27d25724dd39) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:47:54.992266 systemd-journald[1116]: Received client request to flush runtime journal. Jan 29 11:47:54.992339 kernel: loop0: detected capacity change from 0 to 140768 Jan 29 11:47:54.992372 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:47:54.992395 kernel: loop1: detected capacity change from 0 to 205544 Jan 29 11:47:54.992415 kernel: loop2: detected capacity change from 0 to 142488 Jan 29 11:47:54.992441 kernel: loop3: detected capacity change from 0 to 140768 Jan 29 11:47:54.992464 kernel: loop4: detected capacity change from 0 to 205544 Jan 29 11:47:54.992490 kernel: loop5: detected capacity change from 0 to 142488 Jan 29 11:47:54.992513 zram_generator::config[1211]: No configuration found. Jan 29 11:47:54.552817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:47:54.556635 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:47:54.559689 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:47:54.562126 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:47:54.565054 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:47:54.566645 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:47:54.568216 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:47:54.581990 udevadm[1156]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:47:54.618121 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:47:54.620293 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 29 11:47:54.620306 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 29 11:47:54.627814 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:47:54.776935 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:47:54.778743 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:47:54.789122 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:47:54.844945 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:47:54.845565 (sd-merge)[1173]: Merged extensions into '/usr'. Jan 29 11:47:54.849612 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:47:54.849627 systemd[1]: Reloading... Jan 29 11:47:55.030232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:47:55.050668 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:47:55.083151 systemd[1]: Reloading finished in 233 ms. Jan 29 11:47:55.118834 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:47:55.120679 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:47:55.122441 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:47:55.139757 systemd[1]: Starting ensure-sysext.service... Jan 29 11:47:55.142142 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:47:55.148134 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:47:55.148149 systemd[1]: Reloading... Jan 29 11:47:55.205594 zram_generator::config[1279]: No configuration found. Jan 29 11:47:55.312732 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:47:55.362843 systemd[1]: Reloading finished in 214 ms. Jan 29 11:47:55.384839 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:47:55.396176 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:47:55.426154 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:47:55.428901 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:47:55.432069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:47:55.432297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:47:55.437568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:47:55.442775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
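The "(sd-merge)" lines are systemd-sysext overlaying the staged extension images — the kubernetes.raw linked under /etc/extensions by Ignition, plus the containerd-flatcar and docker-flatcar extensions shipped in the image — onto /usr, followed by the reload that picks up the units they provide. An image is only merged when it carries a matching extension-release file; roughly (illustrative values):

    # inside the image: usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

The merge can be inspected or redone by hand with `systemd-sysext status` and `systemd-sysext refresh`.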
Jan 29 11:47:55.445881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:47:55.447370 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jan 29 11:47:55.447384 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jan 29 11:47:55.447813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:47:55.447934 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:47:55.448841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:47:55.449024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:47:55.450856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:47:55.451040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:47:55.454498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:47:55.456299 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:47:55.456482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:47:55.461388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:47:55.461629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:47:55.476782 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:47:55.484393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:47:55.486766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:47:55.488027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:47:55.488141 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:47:55.488979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:47:55.489147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:47:55.493514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:47:55.493774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:47:55.507204 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:47:55.507470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:47:55.508833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:47:55.510830 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:47:55.511943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:47:55.512064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 29 11:47:55.512208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:47:55.512989 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:47:55.513170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:47:55.517310 systemd[1]: Finished ensure-sysext.service. Jan 29 11:47:55.518713 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:47:55.518926 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:47:55.521549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:47:55.521770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:47:55.521907 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:47:55.522223 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:47:55.523202 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:47:55.523498 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jan 29 11:47:55.523598 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jan 29 11:47:55.524512 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:47:55.525313 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:47:55.527161 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:47:55.527175 systemd-tmpfiles[1319]: Skipping /boot Jan 29 11:47:55.530061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:47:55.537951 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:47:55.537966 systemd-tmpfiles[1319]: Skipping /boot Jan 29 11:47:55.562524 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:47:55.573750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:47:55.576686 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:47:55.578908 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:47:55.582752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:47:55.587809 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:47:55.591325 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:47:55.595760 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:47:55.605854 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:47:55.609436 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:47:55.616240 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:47:55.623274 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:47:55.628893 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
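The "Duplicate line for path ..., ignoring" messages come from systemd-tmpfiles: tmpfiles.d fragments are read in sorted order, and when a later fragment declares a path an earlier one already covered, the later line is reported and dropped. A minimal illustration with hypothetical fragment names:

    # /usr/lib/tmpfiles.d/a.conf
    d /var/lib/example 0755 root root -

    # /usr/lib/tmpfiles.d/b.conf  (same path again: reported as a duplicate, then ignored)
    d /var/lib/example 0700 root root -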
Jan 29 11:47:55.633462 augenrules[1369]: No rules Jan 29 11:47:55.641168 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:47:55.647403 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Jan 29 11:47:55.654768 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:47:55.663213 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:47:55.665121 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:47:55.666678 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:47:55.676785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:47:55.682362 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:47:55.721603 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1390) Jan 29 11:47:55.729134 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:47:55.774719 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:47:55.777070 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:47:55.778679 systemd-networkd[1386]: lo: Link UP Jan 29 11:47:55.778692 systemd-networkd[1386]: lo: Gained carrier Jan 29 11:47:55.782718 systemd-networkd[1386]: Enumeration completed Jan 29 11:47:55.782815 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:47:55.783604 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:47:55.783615 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:47:55.788634 systemd-resolved[1347]: Positive Trust Anchors: Jan 29 11:47:55.788653 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:47:55.788686 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:47:55.788818 systemd-networkd[1386]: eth0: Link UP Jan 29 11:47:55.788824 systemd-networkd[1386]: eth0: Gained carrier Jan 29 11:47:55.788877 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:47:55.792686 systemd-resolved[1347]: Defaulting to hostname 'linux'. Jan 29 11:47:55.795828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:47:55.796464 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:47:55.797314 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 29 11:47:55.805959 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:47:55.806856 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 29 11:47:56.478941 systemd-resolved[1347]: Clock change detected. Flushing caches. Jan 29 11:47:56.479049 systemd-timesyncd[1348]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:47:56.479124 systemd-timesyncd[1348]: Initial clock synchronization to Wed 2025-01-29 11:47:56.478901 UTC. Jan 29 11:47:56.482468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:47:56.482877 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:47:56.486732 systemd[1]: Reached target network.target - Network. Jan 29 11:47:56.487886 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:47:56.493859 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:47:56.498176 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:47:56.503901 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:47:56.508400 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:47:56.511377 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:47:56.511600 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:47:56.519697 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:47:56.610891 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:47:56.611144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:47:56.623265 kernel: kvm_amd: TSC scaling supported Jan 29 11:47:56.623314 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:47:56.623327 kernel: kvm_amd: Nested Paging enabled Jan 29 11:47:56.623356 kernel: kvm_amd: LBR virtualization supported Jan 29 11:47:56.624321 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:47:56.624342 kernel: kvm_amd: Virtual GIF supported Jan 29 11:47:56.648192 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:47:56.686297 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:47:56.714105 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:47:56.715819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:47:56.723490 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:47:56.758128 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:47:56.759732 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:47:56.760884 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:47:56.762166 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:47:56.763513 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:47:56.765072 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:47:56.766301 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
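eth0 is matched by the catch-all zz-default.network named in the log, so DHCPv4 configuration (10.0.0.21/16 via 10.0.0.1) begins as soon as the link gains carrier; the same DHCP server supplies the NTP server timesyncd contacted, and the resulting step of the system clock is what resolved reports as "Clock change detected". The shipped unit is approximately (abridged; exact contents vary by Flatcar release):

    # /usr/lib/systemd/network/zz-default.network
    [Match]
    Name=*

    [Network]
    DHCP=yes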
Jan 29 11:47:56.767569 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:47:56.768905 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:47:56.768934 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:47:56.769888 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:47:56.771418 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:47:56.774115 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:47:56.793381 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:47:56.795871 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:47:56.797651 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:47:56.798819 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:47:56.799903 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:47:56.800994 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:47:56.801022 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:47:56.802143 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:47:56.804365 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:47:56.808858 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:47:56.809310 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:47:56.813271 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:47:56.815451 jq[1434]: false Jan 29 11:47:56.815791 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:47:56.818051 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:47:56.823891 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:47:56.828101 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:47:56.843413 extend-filesystems[1435]: Found loop3 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found loop4 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found loop5 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found sr0 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found vda Jan 29 11:47:56.843413 extend-filesystems[1435]: Found vda1 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found vda2 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found vda3 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found usr Jan 29 11:47:56.843413 extend-filesystems[1435]: Found vda4 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found vda6 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found vda7 Jan 29 11:47:56.843413 extend-filesystems[1435]: Found vda9 Jan 29 11:47:56.843413 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 29 11:47:56.869753 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 29 11:47:56.856241 dbus-daemon[1433]: [system] SELinux support is enabled Jan 29 11:47:56.847139 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 29 11:47:56.871383 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:47:56.862213 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:47:56.863856 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:47:56.864476 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:47:56.870828 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:47:56.891364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1393) Jan 29 11:47:56.891280 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:47:56.893509 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:47:56.900950 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:47:56.900881 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:47:56.901630 jq[1454]: true Jan 29 11:47:56.907511 update_engine[1451]: I20250129 11:47:56.907449 1451 main.cc:92] Flatcar Update Engine starting Jan 29 11:47:56.908892 update_engine[1451]: I20250129 11:47:56.908865 1451 update_check_scheduler.cc:74] Next update check in 4m37s Jan 29 11:47:56.916291 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:47:56.916518 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:47:56.916875 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:47:56.917096 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:47:56.919136 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:47:56.919334 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:47:56.926523 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:47:56.930766 jq[1460]: true Jan 29 11:47:56.943452 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:47:56.947060 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:47:56.947098 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:47:56.948582 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:47:56.948599 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:47:56.962359 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:47:57.047538 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:47:57.056279 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:47:57.056305 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:47:57.059666 systemd-logind[1448]: New seat seat0. 
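Two things worth decoding here. First, the filesystem grow: extend-filesystems drives resize2fs against the mounted root (an online resize), taking /dev/vda9 from 553472 to 1864699 4 KiB blocks:

    553472  blocks x 4096 B = 2,267,021,312 B ≈ 2.1 GiB  (as shipped in the image)
    1864699 blocks x 4096 B = 7,637,807,104 B ≈ 7.1 GiB  (after growing into the disk)

Second, locksmithd starts with strategy="reboot", its built-in default; the /etc/flatcar/update.conf written during the files stage (its contents are not shown in this log) is the usual place to override the update group and reboot strategy, for example:

    # /etc/flatcar/update.conf  (example values, not the ones from this boot)
    GROUP=stable
    REBOOT_STRATEGY=off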
Jan 29 11:47:57.060875 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:47:57.066898 tar[1459]: linux-amd64/helm Jan 29 11:47:57.078868 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:47:57.103451 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:47:57.103451 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:47:57.103451 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:47:57.109265 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 29 11:47:57.106184 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:47:57.106512 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:47:57.110826 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:47:57.111755 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:47:57.115159 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:47:57.269787 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:47:57.372133 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:47:57.430290 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:47:57.438424 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:47:57.438693 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:47:57.442599 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:47:57.489609 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:47:57.501188 containerd[1461]: time="2025-01-29T11:47:57.500778943Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:47:57.509452 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:47:57.512902 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:47:57.515107 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:47:57.531176 containerd[1461]: time="2025-01-29T11:47:57.531097380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:57.533091 containerd[1461]: time="2025-01-29T11:47:57.533049190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:57.533091 containerd[1461]: time="2025-01-29T11:47:57.533083003Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:47:57.533186 containerd[1461]: time="2025-01-29T11:47:57.533101338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:47:57.533363 containerd[1461]: time="2025-01-29T11:47:57.533335256Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:47:57.533411 containerd[1461]: time="2025-01-29T11:47:57.533383988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:57.533507 containerd[1461]: time="2025-01-29T11:47:57.533484115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:57.533541 containerd[1461]: time="2025-01-29T11:47:57.533505485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:57.533775 containerd[1461]: time="2025-01-29T11:47:57.533750745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:57.533817 containerd[1461]: time="2025-01-29T11:47:57.533795830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:57.533876 containerd[1461]: time="2025-01-29T11:47:57.533815036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:57.533876 containerd[1461]: time="2025-01-29T11:47:57.533830054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:57.534016 containerd[1461]: time="2025-01-29T11:47:57.533994753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:57.534323 containerd[1461]: time="2025-01-29T11:47:57.534294385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:57.534526 containerd[1461]: time="2025-01-29T11:47:57.534494310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:57.534526 containerd[1461]: time="2025-01-29T11:47:57.534516140Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:47:57.534692 containerd[1461]: time="2025-01-29T11:47:57.534665020Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:47:57.534766 containerd[1461]: time="2025-01-29T11:47:57.534746763Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:47:57.541798 containerd[1461]: time="2025-01-29T11:47:57.541710640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:47:57.541798 containerd[1461]: time="2025-01-29T11:47:57.541766555Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:47:57.541798 containerd[1461]: time="2025-01-29T11:47:57.541785571Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:47:57.541940 containerd[1461]: time="2025-01-29T11:47:57.541804366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:47:57.541940 containerd[1461]: time="2025-01-29T11:47:57.541846375Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:47:57.542102 containerd[1461]: time="2025-01-29T11:47:57.542066828Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:47:57.542408 containerd[1461]: time="2025-01-29T11:47:57.542381618Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:47:57.542572 containerd[1461]: time="2025-01-29T11:47:57.542547319Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:47:57.542613 containerd[1461]: time="2025-01-29T11:47:57.542570743Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:47:57.542613 containerd[1461]: time="2025-01-29T11:47:57.542588476Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:47:57.542661 containerd[1461]: time="2025-01-29T11:47:57.542609145Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:47:57.542661 containerd[1461]: time="2025-01-29T11:47:57.542628932Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:47:57.542661 containerd[1461]: time="2025-01-29T11:47:57.542647066Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:47:57.542728 containerd[1461]: time="2025-01-29T11:47:57.542666112Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:47:57.542728 containerd[1461]: time="2025-01-29T11:47:57.542685728Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:47:57.542728 containerd[1461]: time="2025-01-29T11:47:57.542705105Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:47:57.542728 containerd[1461]: time="2025-01-29T11:47:57.542722888Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:47:57.542807 containerd[1461]: time="2025-01-29T11:47:57.542738648Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:47:57.542807 containerd[1461]: time="2025-01-29T11:47:57.542762883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.542807 containerd[1461]: time="2025-01-29T11:47:57.542779424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.542807 containerd[1461]: time="2025-01-29T11:47:57.542794823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.542927 containerd[1461]: time="2025-01-29T11:47:57.542809881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.542927 containerd[1461]: time="2025-01-29T11:47:57.542825030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.542927 containerd[1461]: time="2025-01-29T11:47:57.542859033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.542927 containerd[1461]: time="2025-01-29T11:47:57.542875624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.542927 containerd[1461]: time="2025-01-29T11:47:57.542890893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.542927 containerd[1461]: time="2025-01-29T11:47:57.542906603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.543066 containerd[1461]: time="2025-01-29T11:47:57.542932191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.543066 containerd[1461]: time="2025-01-29T11:47:57.542948812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.543066 containerd[1461]: time="2025-01-29T11:47:57.542975612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.543066 containerd[1461]: time="2025-01-29T11:47:57.542997232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.543066 containerd[1461]: time="2025-01-29T11:47:57.543016939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:47:57.543066 containerd[1461]: time="2025-01-29T11:47:57.543040393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.543066 containerd[1461]: time="2025-01-29T11:47:57.543054620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.543198 containerd[1461]: time="2025-01-29T11:47:57.543068536Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:47:57.543198 containerd[1461]: time="2025-01-29T11:47:57.543130583Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:47:57.543198 containerd[1461]: time="2025-01-29T11:47:57.543151071Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:47:57.543198 containerd[1461]: time="2025-01-29T11:47:57.543165428Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:47:57.543198 containerd[1461]: time="2025-01-29T11:47:57.543181488Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:47:57.543198 containerd[1461]: time="2025-01-29T11:47:57.543195234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.543341 containerd[1461]: time="2025-01-29T11:47:57.543211494Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:47:57.543341 containerd[1461]: time="2025-01-29T11:47:57.543229328Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:47:57.543341 containerd[1461]: time="2025-01-29T11:47:57.543243083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:47:57.544436 containerd[1461]: time="2025-01-29T11:47:57.543902390Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:47:57.544436 containerd[1461]: time="2025-01-29T11:47:57.544030550Z" level=info msg="Connect containerd service" Jan 29 11:47:57.544436 containerd[1461]: time="2025-01-29T11:47:57.544077969Z" level=info msg="using legacy CRI server" Jan 29 11:47:57.544436 containerd[1461]: time="2025-01-29T11:47:57.544087336Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:47:57.544436 containerd[1461]: time="2025-01-29T11:47:57.544244191Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:47:57.549102 containerd[1461]: time="2025-01-29T11:47:57.548969400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:47:57.549204 containerd[1461]: time="2025-01-29T11:47:57.549165037Z" level=info msg="Start subscribing containerd event" Jan 29 11:47:57.549259 containerd[1461]: time="2025-01-29T11:47:57.549223036Z" level=info msg="Start recovering state" Jan 29 11:47:57.549417 containerd[1461]: time="2025-01-29T11:47:57.549389969Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:47:57.549453 containerd[1461]: time="2025-01-29T11:47:57.549392323Z" level=info msg="Start event monitor" Jan 29 11:47:57.549453 containerd[1461]: time="2025-01-29T11:47:57.549444060Z" level=info msg="Start snapshots syncer" Jan 29 11:47:57.549518 containerd[1461]: time="2025-01-29T11:47:57.549456063Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:47:57.549518 containerd[1461]: time="2025-01-29T11:47:57.549465551Z" level=info msg="Start streaming server" Jan 29 11:47:57.549887 containerd[1461]: time="2025-01-29T11:47:57.549469628Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:47:57.549887 containerd[1461]: time="2025-01-29T11:47:57.549667870Z" level=info msg="containerd successfully booted in 0.051591s" Jan 29 11:47:57.549763 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:47:57.743199 tar[1459]: linux-amd64/LICENSE Jan 29 11:47:57.743313 tar[1459]: linux-amd64/README.md Jan 29 11:47:57.759153 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:47:58.435096 systemd-networkd[1386]: eth0: Gained IPv6LL Jan 29 11:47:58.439166 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:47:58.441197 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:47:58.456244 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:47:58.459551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:47:58.462271 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:47:58.480153 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:47:58.480421 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:47:58.483199 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:47:58.489080 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:47:59.697152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:47:59.698849 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:47:59.701117 systemd[1]: Startup finished in 951ms (kernel) + 9.737s (initrd) + 5.614s (userspace) = 16.303s. Jan 29 11:47:59.721292 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:48:00.112994 kubelet[1546]: E0129 11:48:00.112856 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:48:00.116805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:48:00.117201 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:48:00.117561 systemd[1]: kubelet.service: Consumed 1.451s CPU time.
Jan 29 11:48:00.567592 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:48:00.569065 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:33500.service - OpenSSH per-connection server daemon (10.0.0.1:33500). Jan 29 11:48:00.612712 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 33500 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:48:00.615239 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:00.623457 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:48:00.632088 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:48:00.633597 systemd-logind[1448]: New session 1 of user core. Jan 29 11:48:00.646660 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:48:00.649464 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:48:00.658008 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:48:00.765169 systemd[1563]: Queued start job for default target default.target. Jan 29 11:48:00.774117 systemd[1563]: Created slice app.slice - User Application Slice. Jan 29 11:48:00.774142 systemd[1563]: Reached target paths.target - Paths. Jan 29 11:48:00.774156 systemd[1563]: Reached target timers.target - Timers. Jan 29 11:48:00.775686 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:48:00.788433 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:48:00.788596 systemd[1563]: Reached target sockets.target - Sockets. Jan 29 11:48:00.788615 systemd[1563]: Reached target basic.target - Basic System. Jan 29 11:48:00.788662 systemd[1563]: Reached target default.target - Main User Target. Jan 29 11:48:00.788703 systemd[1563]: Startup finished in 124ms. Jan 29 11:48:00.789777 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:48:00.791910 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:48:00.853723 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:33514.service - OpenSSH per-connection server daemon (10.0.0.1:33514). Jan 29 11:48:00.891472 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 33514 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:48:00.893245 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:00.897976 systemd-logind[1448]: New session 2 of user core. Jan 29 11:48:00.904987 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:48:00.958817 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:00.969692 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:33514.service: Deactivated successfully. Jan 29 11:48:00.971501 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:48:00.972972 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:48:00.983087 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:33516.service - OpenSSH per-connection server daemon (10.0.0.1:33516). Jan 29 11:48:00.984056 systemd-logind[1448]: Removed session 2. 
Jan 29 11:48:01.010892 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 33516 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:48:01.012332 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:01.016159 systemd-logind[1448]: New session 3 of user core. Jan 29 11:48:01.029964 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:48:01.081374 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:01.093517 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:33516.service: Deactivated successfully. Jan 29 11:48:01.095084 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:48:01.096432 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:48:01.097641 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:51764.service - OpenSSH per-connection server daemon (10.0.0.1:51764). Jan 29 11:48:01.098546 systemd-logind[1448]: Removed session 3. Jan 29 11:48:01.144962 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 51764 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:48:01.147012 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:01.151600 systemd-logind[1448]: New session 4 of user core. Jan 29 11:48:01.159980 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:48:01.215738 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:01.227677 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:51764.service: Deactivated successfully. Jan 29 11:48:01.229273 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:48:01.231022 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:48:01.243131 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:51772.service - OpenSSH per-connection server daemon (10.0.0.1:51772). Jan 29 11:48:01.244188 systemd-logind[1448]: Removed session 4. Jan 29 11:48:01.271470 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 51772 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:48:01.273040 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:01.277504 systemd-logind[1448]: New session 5 of user core. Jan 29 11:48:01.291081 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:48:01.351065 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:48:01.351517 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:48:01.377107 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 29 11:48:01.380413 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:01.392727 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:51772.service: Deactivated successfully. Jan 29 11:48:01.395289 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:48:01.397380 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:48:01.411306 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:51778.service - OpenSSH per-connection server daemon (10.0.0.1:51778). Jan 29 11:48:01.412483 systemd-logind[1448]: Removed session 5. 
Jan 29 11:48:01.441161 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 51778 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:48:01.442800 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:01.447133 systemd-logind[1448]: New session 6 of user core. Jan 29 11:48:01.457027 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:48:01.514061 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:48:01.514516 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:48:01.518527 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 29 11:48:01.525061 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:48:01.525404 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:48:01.548134 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:48:01.549984 auditctl[1610]: No rules Jan 29 11:48:01.551465 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:48:01.551734 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:48:01.553626 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:48:01.588275 augenrules[1628]: No rules Jan 29 11:48:01.590944 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:48:01.592414 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 29 11:48:01.594263 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:01.611873 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:51778.service: Deactivated successfully. Jan 29 11:48:01.613541 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:48:01.615282 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:48:01.624263 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:51780.service - OpenSSH per-connection server daemon (10.0.0.1:51780). Jan 29 11:48:01.625267 systemd-logind[1448]: Removed session 6. Jan 29 11:48:01.652408 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 51780 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:48:01.654202 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:01.658240 systemd-logind[1448]: New session 7 of user core. Jan 29 11:48:01.667953 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:48:01.722527 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:48:01.722958 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:48:02.019065 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:48:02.019272 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:48:02.543292 dockerd[1657]: time="2025-01-29T11:48:02.543200973Z" level=info msg="Starting up" Jan 29 11:48:03.096525 dockerd[1657]: time="2025-01-29T11:48:03.096453842Z" level=info msg="Loading containers: start." 
Jan 29 11:48:03.237986 kernel: Initializing XFRM netlink socket Jan 29 11:48:03.330703 systemd-networkd[1386]: docker0: Link UP Jan 29 11:48:03.359058 dockerd[1657]: time="2025-01-29T11:48:03.358911530Z" level=info msg="Loading containers: done." Jan 29 11:48:03.378067 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2749974640-merged.mount: Deactivated successfully. Jan 29 11:48:03.380379 dockerd[1657]: time="2025-01-29T11:48:03.380331643Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:48:03.380470 dockerd[1657]: time="2025-01-29T11:48:03.380450646Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:48:03.380585 dockerd[1657]: time="2025-01-29T11:48:03.380560843Z" level=info msg="Daemon has completed initialization" Jan 29 11:48:03.429866 dockerd[1657]: time="2025-01-29T11:48:03.429760011Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:48:03.430102 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:48:06.212942 containerd[1461]: time="2025-01-29T11:48:06.212890069Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:48:06.864948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242622035.mount: Deactivated successfully. Jan 29 11:48:08.115545 containerd[1461]: time="2025-01-29T11:48:08.115464746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:08.116221 containerd[1461]: time="2025-01-29T11:48:08.116181500Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:48:08.117308 containerd[1461]: time="2025-01-29T11:48:08.117262818Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:08.120038 containerd[1461]: time="2025-01-29T11:48:08.119999299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:08.121202 containerd[1461]: time="2025-01-29T11:48:08.121157530Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.908220492s" Jan 29 11:48:08.121202 containerd[1461]: time="2025-01-29T11:48:08.121199429Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:48:08.123044 containerd[1461]: time="2025-01-29T11:48:08.122969738Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:48:09.572308 containerd[1461]: time="2025-01-29T11:48:09.572218189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:09.573271 containerd[1461]: time="2025-01-29T11:48:09.573216641Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:48:09.574771 containerd[1461]: time="2025-01-29T11:48:09.574664956Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:09.578399 containerd[1461]: time="2025-01-29T11:48:09.578325360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:09.579425 containerd[1461]: time="2025-01-29T11:48:09.579389886Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.456384581s" Jan 29 11:48:09.579425 containerd[1461]: time="2025-01-29T11:48:09.579423879Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:48:09.580092 containerd[1461]: time="2025-01-29T11:48:09.580066855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:48:10.367645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:48:10.380067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:10.589371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:10.595057 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:48:10.672249 kubelet[1870]: E0129 11:48:10.672070 1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:48:10.678968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:48:10.679192 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:48:11.965818 containerd[1461]: time="2025-01-29T11:48:11.965720655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:11.967150 containerd[1461]: time="2025-01-29T11:48:11.967058754Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:48:11.969811 containerd[1461]: time="2025-01-29T11:48:11.969765780Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:11.973449 containerd[1461]: time="2025-01-29T11:48:11.973392740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:11.974957 containerd[1461]: time="2025-01-29T11:48:11.974900036Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 2.394796784s" Jan 29 11:48:11.974957 containerd[1461]: time="2025-01-29T11:48:11.974952705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:48:11.975732 containerd[1461]: time="2025-01-29T11:48:11.975676382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:48:13.646366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760644976.mount: Deactivated successfully. 
Jan 29 11:48:14.319579 containerd[1461]: time="2025-01-29T11:48:14.319444386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:14.320887 containerd[1461]: time="2025-01-29T11:48:14.320702415Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:48:14.343288 containerd[1461]: time="2025-01-29T11:48:14.343212311Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:14.370036 containerd[1461]: time="2025-01-29T11:48:14.369963881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:14.370914 containerd[1461]: time="2025-01-29T11:48:14.370875780Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.395143162s" Jan 29 11:48:14.370976 containerd[1461]: time="2025-01-29T11:48:14.370913261Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:48:14.371581 containerd[1461]: time="2025-01-29T11:48:14.371518956Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:48:14.881714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458206380.mount: Deactivated successfully. 
Jan 29 11:48:16.974694 containerd[1461]: time="2025-01-29T11:48:16.974601335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:16.976204 containerd[1461]: time="2025-01-29T11:48:16.976147715Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:48:16.977522 containerd[1461]: time="2025-01-29T11:48:16.977472549Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:16.981048 containerd[1461]: time="2025-01-29T11:48:16.980997488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:16.982196 containerd[1461]: time="2025-01-29T11:48:16.982163905Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.610582081s" Jan 29 11:48:16.982240 containerd[1461]: time="2025-01-29T11:48:16.982198911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:48:16.982795 containerd[1461]: time="2025-01-29T11:48:16.982750435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:48:17.542856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675469521.mount: Deactivated successfully. 
Jan 29 11:48:17.549207 containerd[1461]: time="2025-01-29T11:48:17.549152328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:17.550067 containerd[1461]: time="2025-01-29T11:48:17.550015466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:48:17.551197 containerd[1461]: time="2025-01-29T11:48:17.551143652Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:17.553355 containerd[1461]: time="2025-01-29T11:48:17.553284035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:17.554105 containerd[1461]: time="2025-01-29T11:48:17.554068726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 571.276022ms" Jan 29 11:48:17.554105 containerd[1461]: time="2025-01-29T11:48:17.554096929Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:48:17.554646 containerd[1461]: time="2025-01-29T11:48:17.554503441Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:48:18.090641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466334573.mount: Deactivated successfully. Jan 29 11:48:20.929485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:48:20.940084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:21.089789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:21.095015 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:48:21.214175 kubelet[2003]: E0129 11:48:21.214010 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:48:21.218767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:48:21.219040 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:48:21.451013 containerd[1461]: time="2025-01-29T11:48:21.450947663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:21.452148 containerd[1461]: time="2025-01-29T11:48:21.452051773Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:48:21.453253 containerd[1461]: time="2025-01-29T11:48:21.453186651Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:21.456543 containerd[1461]: time="2025-01-29T11:48:21.456487671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:21.457781 containerd[1461]: time="2025-01-29T11:48:21.457744768Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.903212853s" Jan 29 11:48:21.457860 containerd[1461]: time="2025-01-29T11:48:21.457783060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:48:23.717561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:23.737326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:23.765217 systemd[1]: Reloading requested from client PID 2039 ('systemctl') (unit session-7.scope)... Jan 29 11:48:23.765236 systemd[1]: Reloading... Jan 29 11:48:23.863543 zram_generator::config[2081]: No configuration found. Jan 29 11:48:24.276605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:48:24.372264 systemd[1]: Reloading finished in 606 ms. Jan 29 11:48:24.430019 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:48:24.430185 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:48:24.430593 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:24.433048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:24.607428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:24.614371 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:48:24.695016 kubelet[2127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:48:24.695016 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 11:48:24.695016 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:48:24.696579 kubelet[2127]: I0129 11:48:24.696502 2127 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:48:25.015079 kubelet[2127]: I0129 11:48:25.014933 2127 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:48:25.015079 kubelet[2127]: I0129 11:48:25.014969 2127 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:48:25.015445 kubelet[2127]: I0129 11:48:25.015414 2127 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:48:25.083424 kubelet[2127]: I0129 11:48:25.083350 2127 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:48:25.084403 kubelet[2127]: E0129 11:48:25.084354 2127 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:25.092331 kubelet[2127]: E0129 11:48:25.092287 2127 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:48:25.092397 kubelet[2127]: I0129 11:48:25.092331 2127 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:48:25.107023 kubelet[2127]: I0129 11:48:25.106989 2127 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:48:25.119595 kubelet[2127]: I0129 11:48:25.119550 2127 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:48:25.119785 kubelet[2127]: I0129 11:48:25.119730 2127 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:48:25.120017 kubelet[2127]: I0129 11:48:25.119769 2127 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:48:25.120129 kubelet[2127]: I0129 11:48:25.120021 2127 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:48:25.120129 kubelet[2127]: I0129 11:48:25.120031 2127 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:48:25.120176 kubelet[2127]: I0129 11:48:25.120164 2127 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:48:25.125508 kubelet[2127]: I0129 11:48:25.125458 2127 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:48:25.125508 kubelet[2127]: I0129 11:48:25.125494 2127 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:48:25.125673 kubelet[2127]: I0129 11:48:25.125537 2127 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:48:25.125673 kubelet[2127]: I0129 11:48:25.125553 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:48:25.133370 kubelet[2127]: W0129 11:48:25.133216 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:25.133440 kubelet[2127]: E0129 11:48:25.133382 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:25.134059 kubelet[2127]: W0129 11:48:25.133507 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:25.134059 kubelet[2127]: E0129 11:48:25.133563 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:25.147646 kubelet[2127]: I0129 11:48:25.147589 2127 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:48:25.152882 kubelet[2127]: I0129 11:48:25.152854 2127 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:48:25.153772 kubelet[2127]: W0129 11:48:25.153747 2127 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:48:25.154645 kubelet[2127]: I0129 11:48:25.154619 2127 server.go:1269] "Started kubelet" Jan 29 11:48:25.155396 kubelet[2127]: I0129 11:48:25.155053 2127 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:48:25.155518 kubelet[2127]: I0129 11:48:25.155501 2127 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:48:25.155577 kubelet[2127]: I0129 11:48:25.155561 2127 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:48:25.156186 kubelet[2127]: I0129 11:48:25.156166 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:48:25.158144 kubelet[2127]: I0129 11:48:25.156464 2127 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:48:25.158144 kubelet[2127]: I0129 11:48:25.156767 2127 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:48:25.158144 kubelet[2127]: I0129 11:48:25.156808 2127 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:48:25.158144 kubelet[2127]: E0129 11:48:25.157101 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:25.158144 kubelet[2127]: I0129 11:48:25.157435 2127 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:48:25.158144 kubelet[2127]: I0129 11:48:25.157500 2127 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:48:25.158144 kubelet[2127]: E0129 11:48:25.157496 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Jan 29 11:48:25.158597 kubelet[2127]: W0129 11:48:25.158550 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:25.158717 kubelet[2127]: E0129 11:48:25.158660 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:25.159636 kubelet[2127]: I0129 11:48:25.159622 2127 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:48:25.163868 kubelet[2127]: I0129 11:48:25.159805 2127 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:48:25.163868 kubelet[2127]: E0129 11:48:25.159765 2127 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:48:25.163868 kubelet[2127]: I0129 11:48:25.159913 2127 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:48:25.185062 kubelet[2127]: E0129 11:48:25.181699 2127 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f2767259ef3fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:48:25.154589691 +0000 UTC m=+0.529339262,LastTimestamp:2025-01-29 11:48:25.154589691 +0000 UTC m=+0.529339262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:48:25.195207 kubelet[2127]: I0129 11:48:25.195181 2127 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:48:25.195207 kubelet[2127]: I0129 11:48:25.195197 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:48:25.195207 kubelet[2127]: I0129 11:48:25.195213 2127 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:48:25.204584 kubelet[2127]: I0129 11:48:25.204523 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:48:25.206034 kubelet[2127]: I0129 11:48:25.205986 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:48:25.206093 kubelet[2127]: I0129 11:48:25.206040 2127 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:48:25.206093 kubelet[2127]: I0129 11:48:25.206062 2127 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:48:25.206159 kubelet[2127]: E0129 11:48:25.206108 2127 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:48:25.257927 kubelet[2127]: E0129 11:48:25.257868 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:25.307405 kubelet[2127]: E0129 11:48:25.307212 2127 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:48:25.358549 kubelet[2127]: E0129 11:48:25.358470 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:25.358995 kubelet[2127]: E0129 11:48:25.358951 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Jan 29 11:48:25.459462 kubelet[2127]: E0129 11:48:25.459388 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:25.507720 kubelet[2127]: E0129 11:48:25.507636 2127 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:48:25.560183 kubelet[2127]: E0129 11:48:25.560037 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:25.660756 kubelet[2127]: E0129 11:48:25.660702 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:25.759457 kubelet[2127]: E0129 11:48:25.759384 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Jan 29 11:48:25.761569 kubelet[2127]: E0129 11:48:25.761527 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:25.862085 kubelet[2127]: E0129 11:48:25.861967 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:25.908267 kubelet[2127]: E0129 11:48:25.908184 2127 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:48:25.962862 kubelet[2127]: E0129 11:48:25.962720 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.052587 kubelet[2127]: W0129 11:48:26.052483 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:26.052587 kubelet[2127]: E0129 11:48:26.052538 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list
*v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:26.063094 kubelet[2127]: E0129 11:48:26.063045 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.079230 kubelet[2127]: W0129 11:48:26.079123 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:26.079230 kubelet[2127]: E0129 11:48:26.079217 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:26.163977 kubelet[2127]: E0129 11:48:26.163769 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.264644 kubelet[2127]: E0129 11:48:26.264577 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.275188 kubelet[2127]: I0129 11:48:26.275124 2127 policy_none.go:49] "None policy: Start" Jan 29 11:48:26.276167 kubelet[2127]: I0129 11:48:26.276097 2127 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:48:26.276167 kubelet[2127]: I0129 11:48:26.276129 2127 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:48:26.339009 kubelet[2127]: W0129 11:48:26.338955 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:26.339009 kubelet[2127]: E0129 11:48:26.339012 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:26.365729 kubelet[2127]: E0129 11:48:26.365664 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.466507 kubelet[2127]: E0129 11:48:26.466326 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.552199 kubelet[2127]: W0129 11:48:26.552148 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:26.552199 kubelet[2127]: E0129 11:48:26.552198 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:26.559798 
kubelet[2127]: E0129 11:48:26.559767 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s" Jan 29 11:48:26.567014 kubelet[2127]: E0129 11:48:26.566970 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.668082 kubelet[2127]: E0129 11:48:26.668022 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.709287 kubelet[2127]: E0129 11:48:26.709228 2127 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:48:26.768929 kubelet[2127]: E0129 11:48:26.768734 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.875622 kubelet[2127]: E0129 11:48:26.875571 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:26.878382 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:48:26.894297 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:48:26.897768 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:48:26.918022 kubelet[2127]: I0129 11:48:26.917995 2127 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:48:26.918227 kubelet[2127]: I0129 11:48:26.918212 2127 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:48:26.918284 kubelet[2127]: I0129 11:48:26.918228 2127 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:48:26.918530 kubelet[2127]: I0129 11:48:26.918457 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:48:26.919436 kubelet[2127]: E0129 11:48:26.919408 2127 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:48:27.020662 kubelet[2127]: I0129 11:48:27.020531 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:27.020947 kubelet[2127]: E0129 11:48:27.020919 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Jan 29 11:48:27.094110 kubelet[2127]: W0129 11:48:27.094036 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:27.094110 kubelet[2127]: E0129 11:48:27.094116 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:27.108220 kubelet[2127]: E0129 11:48:27.108179 2127 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:27.222829 kubelet[2127]: I0129 11:48:27.222781 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:27.223308 kubelet[2127]: E0129 11:48:27.223268 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Jan 29 11:48:27.625174 kubelet[2127]: I0129 11:48:27.625134 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:27.625491 kubelet[2127]: E0129 11:48:27.625450 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Jan 29 11:48:27.742157 kubelet[2127]: W0129 11:48:27.742027 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:27.742157 kubelet[2127]: E0129 11:48:27.742117 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:28.161289 kubelet[2127]: E0129 11:48:28.161184 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="3.2s" Jan 29 11:48:28.319199 systemd[1]: Created slice kubepods-burstable-podd94e506e23b01ee6d048e2e0618a2018.slice - libcontainer container kubepods-burstable-podd94e506e23b01ee6d048e2e0618a2018.slice. Jan 29 11:48:28.334475 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 11:48:28.348079 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
Jan 29 11:48:28.385426 kubelet[2127]: I0129 11:48:28.385362 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d94e506e23b01ee6d048e2e0618a2018-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d94e506e23b01ee6d048e2e0618a2018\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:28.385426 kubelet[2127]: I0129 11:48:28.385424 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:28.385618 kubelet[2127]: I0129 11:48:28.385444 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:28.385618 kubelet[2127]: I0129 11:48:28.385459 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:48:28.385618 kubelet[2127]: I0129 11:48:28.385480 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d94e506e23b01ee6d048e2e0618a2018-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d94e506e23b01ee6d048e2e0618a2018\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:28.385618 kubelet[2127]: I0129 11:48:28.385541 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:28.385618 kubelet[2127]: I0129 11:48:28.385591 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:28.385792 kubelet[2127]: I0129 11:48:28.385619 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:28.385792 kubelet[2127]: I0129 11:48:28.385642 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d94e506e23b01ee6d048e2e0618a2018-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d94e506e23b01ee6d048e2e0618a2018\") " 
pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:28.427853 kubelet[2127]: I0129 11:48:28.427675 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:28.428057 kubelet[2127]: E0129 11:48:28.428028 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Jan 29 11:48:28.633233 kubelet[2127]: E0129 11:48:28.633161 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:28.634114 containerd[1461]: time="2025-01-29T11:48:28.634058925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d94e506e23b01ee6d048e2e0618a2018,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:28.637322 kubelet[2127]: E0129 11:48:28.637300 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:28.637633 containerd[1461]: time="2025-01-29T11:48:28.637608009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:28.650938 kubelet[2127]: E0129 11:48:28.650913 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:28.651303 containerd[1461]: time="2025-01-29T11:48:28.651260929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:28.860360 kubelet[2127]: W0129 11:48:28.860180 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:28.860360 kubelet[2127]: E0129 11:48:28.860276 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:28.936874 kubelet[2127]: W0129 11:48:28.936779 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Jan 29 11:48:28.936874 kubelet[2127]: E0129 11:48:28.936886 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:28.976998 kubelet[2127]: W0129 11:48:28.976921 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: 
connect: connection refused Jan 29 11:48:28.976998 kubelet[2127]: E0129 11:48:28.976998 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:29.547357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623712435.mount: Deactivated successfully. Jan 29 11:48:29.578436 containerd[1461]: time="2025-01-29T11:48:29.578290162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:29.582967 containerd[1461]: time="2025-01-29T11:48:29.582552597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:48:29.584472 containerd[1461]: time="2025-01-29T11:48:29.584386169Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:29.586016 containerd[1461]: time="2025-01-29T11:48:29.585962138Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:29.587468 containerd[1461]: time="2025-01-29T11:48:29.587397336Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:29.588386 containerd[1461]: time="2025-01-29T11:48:29.588322518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:48:29.589468 containerd[1461]: time="2025-01-29T11:48:29.589395514Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:48:29.591679 containerd[1461]: time="2025-01-29T11:48:29.591624703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:29.593186 containerd[1461]: time="2025-01-29T11:48:29.593124256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 955.468155ms" Jan 29 11:48:29.594070 containerd[1461]: time="2025-01-29T11:48:29.594037024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 959.882429ms" Jan 29 11:48:29.597657 containerd[1461]: time="2025-01-29T11:48:29.597587745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 946.249461ms" Jan 29 11:48:30.029679 kubelet[2127]: I0129 11:48:30.029530 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:30.030162 kubelet[2127]: E0129 11:48:30.029869 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Jan 29 11:48:30.312742 containerd[1461]: time="2025-01-29T11:48:30.311208709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:30.312742 containerd[1461]: time="2025-01-29T11:48:30.311278202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:30.312742 containerd[1461]: time="2025-01-29T11:48:30.311291728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:30.312742 containerd[1461]: time="2025-01-29T11:48:30.311385077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:30.314148 containerd[1461]: time="2025-01-29T11:48:30.313713150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:30.314148 containerd[1461]: time="2025-01-29T11:48:30.313802460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:30.314148 containerd[1461]: time="2025-01-29T11:48:30.313825044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:30.314148 containerd[1461]: time="2025-01-29T11:48:30.313950884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:30.314974 containerd[1461]: time="2025-01-29T11:48:30.314483543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:30.314974 containerd[1461]: time="2025-01-29T11:48:30.314592401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:30.316715 containerd[1461]: time="2025-01-29T11:48:30.314652387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:30.316715 containerd[1461]: time="2025-01-29T11:48:30.315647930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:30.342138 systemd[1]: Started cri-containerd-f245b7e7118d5e467331c744c2bd7dedc8823fdcdf672c786c7021d2ffcda02d.scope - libcontainer container f245b7e7118d5e467331c744c2bd7dedc8823fdcdf672c786c7021d2ffcda02d. 
Jan 29 11:48:30.348603 systemd[1]: Started cri-containerd-5d51b7cfbd3810fea53bc48ec3f95cd63dbe3c36b567a7907d9404f79fca318f.scope - libcontainer container 5d51b7cfbd3810fea53bc48ec3f95cd63dbe3c36b567a7907d9404f79fca318f. Jan 29 11:48:30.350901 systemd[1]: Started cri-containerd-c8dc296de2056c3f7f17df59c42c8142f35462fd7d16d745adba334769a00260.scope - libcontainer container c8dc296de2056c3f7f17df59c42c8142f35462fd7d16d745adba334769a00260. Jan 29 11:48:30.435266 containerd[1461]: time="2025-01-29T11:48:30.435099127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d51b7cfbd3810fea53bc48ec3f95cd63dbe3c36b567a7907d9404f79fca318f\"" Jan 29 11:48:30.435266 containerd[1461]: time="2025-01-29T11:48:30.435183939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8dc296de2056c3f7f17df59c42c8142f35462fd7d16d745adba334769a00260\"" Jan 29 11:48:30.436338 kubelet[2127]: E0129 11:48:30.436297 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:30.436633 kubelet[2127]: E0129 11:48:30.436609 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:30.439534 containerd[1461]: time="2025-01-29T11:48:30.439444399Z" level=info msg="CreateContainer within sandbox \"c8dc296de2056c3f7f17df59c42c8142f35462fd7d16d745adba334769a00260\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:48:30.439658 containerd[1461]: time="2025-01-29T11:48:30.439595759Z" level=info msg="CreateContainer within sandbox \"5d51b7cfbd3810fea53bc48ec3f95cd63dbe3c36b567a7907d9404f79fca318f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:48:30.449449 containerd[1461]: time="2025-01-29T11:48:30.449300833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d94e506e23b01ee6d048e2e0618a2018,Namespace:kube-system,Attempt:0,} returns sandbox id \"f245b7e7118d5e467331c744c2bd7dedc8823fdcdf672c786c7021d2ffcda02d\"" Jan 29 11:48:30.450294 kubelet[2127]: E0129 11:48:30.450259 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:30.452203 containerd[1461]: time="2025-01-29T11:48:30.452162397Z" level=info msg="CreateContainer within sandbox \"f245b7e7118d5e467331c744c2bd7dedc8823fdcdf672c786c7021d2ffcda02d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:48:30.483061 containerd[1461]: time="2025-01-29T11:48:30.482916366Z" level=info msg="CreateContainer within sandbox \"c8dc296de2056c3f7f17df59c42c8142f35462fd7d16d745adba334769a00260\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"96d860aad65ee611332a6b4e30de7d31ab5da09a07fb254c5da078e4591d2515\"" Jan 29 11:48:30.484397 containerd[1461]: time="2025-01-29T11:48:30.484022792Z" level=info msg="StartContainer for \"96d860aad65ee611332a6b4e30de7d31ab5da09a07fb254c5da078e4591d2515\"" Jan 29 11:48:30.495648 containerd[1461]: time="2025-01-29T11:48:30.495581252Z" level=info 
msg="CreateContainer within sandbox \"5d51b7cfbd3810fea53bc48ec3f95cd63dbe3c36b567a7907d9404f79fca318f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fd4efe74c68840b861d7301564400f7e2df638f4b046356305a9b3764268ea2d\"" Jan 29 11:48:30.496729 containerd[1461]: time="2025-01-29T11:48:30.496487826Z" level=info msg="StartContainer for \"fd4efe74c68840b861d7301564400f7e2df638f4b046356305a9b3764268ea2d\"" Jan 29 11:48:30.502489 containerd[1461]: time="2025-01-29T11:48:30.502373686Z" level=info msg="CreateContainer within sandbox \"f245b7e7118d5e467331c744c2bd7dedc8823fdcdf672c786c7021d2ffcda02d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b5a3c004f7c853930deb2f8ac52897debfea5b6699b75e07b20fb1ea4da2009\"" Jan 29 11:48:30.503708 containerd[1461]: time="2025-01-29T11:48:30.503624959Z" level=info msg="StartContainer for \"5b5a3c004f7c853930deb2f8ac52897debfea5b6699b75e07b20fb1ea4da2009\"" Jan 29 11:48:30.537056 systemd[1]: Started cri-containerd-fd4efe74c68840b861d7301564400f7e2df638f4b046356305a9b3764268ea2d.scope - libcontainer container fd4efe74c68840b861d7301564400f7e2df638f4b046356305a9b3764268ea2d. Jan 29 11:48:30.542553 systemd[1]: Started cri-containerd-5b5a3c004f7c853930deb2f8ac52897debfea5b6699b75e07b20fb1ea4da2009.scope - libcontainer container 5b5a3c004f7c853930deb2f8ac52897debfea5b6699b75e07b20fb1ea4da2009. Jan 29 11:48:30.544749 systemd[1]: Started cri-containerd-96d860aad65ee611332a6b4e30de7d31ab5da09a07fb254c5da078e4591d2515.scope - libcontainer container 96d860aad65ee611332a6b4e30de7d31ab5da09a07fb254c5da078e4591d2515. Jan 29 11:48:30.642811 containerd[1461]: time="2025-01-29T11:48:30.642562110Z" level=info msg="StartContainer for \"fd4efe74c68840b861d7301564400f7e2df638f4b046356305a9b3764268ea2d\" returns successfully" Jan 29 11:48:30.643922 containerd[1461]: time="2025-01-29T11:48:30.642589172Z" level=info msg="StartContainer for \"5b5a3c004f7c853930deb2f8ac52897debfea5b6699b75e07b20fb1ea4da2009\" returns successfully" Jan 29 11:48:30.643922 containerd[1461]: time="2025-01-29T11:48:30.642646672Z" level=info msg="StartContainer for \"96d860aad65ee611332a6b4e30de7d31ab5da09a07fb254c5da078e4591d2515\" returns successfully" Jan 29 11:48:31.224542 kubelet[2127]: E0129 11:48:31.224472 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:31.225881 kubelet[2127]: E0129 11:48:31.225563 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:31.226995 kubelet[2127]: E0129 11:48:31.226971 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:32.230472 kubelet[2127]: E0129 11:48:32.230393 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:32.542622 kubelet[2127]: E0129 11:48:32.542311 2127 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:48:32.669338 kubelet[2127]: E0129 11:48:32.669194 2127 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.181f2767259ef3fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:48:25.154589691 +0000 UTC m=+0.529339262,LastTimestamp:2025-01-29 11:48:25.154589691 +0000 UTC m=+0.529339262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:48:32.796960 kubelet[2127]: E0129 11:48:32.795789 2127 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f276725edcd04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:48:25.15975706 +0000 UTC m=+0.534506631,LastTimestamp:2025-01-29 11:48:25.15975706 +0000 UTC m=+0.534506631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:48:32.972588 kubelet[2127]: E0129 11:48:32.972418 2127 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f276727fe7957 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:48:25.194404183 +0000 UTC m=+0.569153754,LastTimestamp:2025-01-29 11:48:25.194404183 +0000 UTC m=+0.569153754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:48:33.137335 kubelet[2127]: E0129 11:48:33.137091 2127 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f276727feb81d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:48:25.194420253 +0000 UTC m=+0.569169824,LastTimestamp:2025-01-29 11:48:25.194420253 +0000 UTC m=+0.569169824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:48:33.148764 kubelet[2127]: E0129 11:48:33.148729 2127 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:48:33.231542 kubelet[2127]: I0129 11:48:33.231493 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:33.251816 kubelet[2127]: I0129 11:48:33.251776 2127 
kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:48:33.251816 kubelet[2127]: E0129 11:48:33.251809 2127 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:48:33.274529 kubelet[2127]: E0129 11:48:33.274458 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:33.374812 kubelet[2127]: E0129 11:48:33.374766 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:33.475320 kubelet[2127]: E0129 11:48:33.475141 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:33.576981 kubelet[2127]: E0129 11:48:33.576924 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:33.677326 kubelet[2127]: E0129 11:48:33.677275 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:33.777903 kubelet[2127]: E0129 11:48:33.777760 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:34.137829 kubelet[2127]: I0129 11:48:34.137672 2127 apiserver.go:52] "Watching apiserver" Jan 29 11:48:34.157944 kubelet[2127]: I0129 11:48:34.157891 2127 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:48:35.044583 kubelet[2127]: E0129 11:48:35.044535 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:35.226994 kubelet[2127]: I0129 11:48:35.226929 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2268907709999999 podStartE2EDuration="1.226890771s" podCreationTimestamp="2025-01-29 11:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:35.226866936 +0000 UTC m=+10.601616527" watchObservedRunningTime="2025-01-29 11:48:35.226890771 +0000 UTC m=+10.601640342" Jan 29 11:48:35.234109 kubelet[2127]: E0129 11:48:35.234073 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:35.264439 systemd[1]: Reloading requested from client PID 2409 ('systemctl') (unit session-7.scope)... Jan 29 11:48:35.264454 systemd[1]: Reloading... Jan 29 11:48:35.350870 zram_generator::config[2451]: No configuration found. Jan 29 11:48:35.460707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:48:35.552996 systemd[1]: Reloading finished in 288 ms. Jan 29 11:48:35.604156 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:35.610180 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:48:35.610599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:35.610685 systemd[1]: kubelet.service: Consumed 1.166s CPU time, 123.4M memory peak, 0B memory swap peak. 
Jan 29 11:48:35.630109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:35.777186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:35.782374 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:48:35.828213 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:48:35.828213 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:48:35.828213 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:48:35.828700 kubelet[2493]: I0129 11:48:35.828253 2493 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:48:35.834495 kubelet[2493]: I0129 11:48:35.834451 2493 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:48:35.834495 kubelet[2493]: I0129 11:48:35.834475 2493 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:48:35.835983 kubelet[2493]: I0129 11:48:35.834780 2493 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:48:35.837143 kubelet[2493]: I0129 11:48:35.837102 2493 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:48:35.839669 kubelet[2493]: I0129 11:48:35.839566 2493 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:48:35.842599 kubelet[2493]: E0129 11:48:35.842568 2493 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:48:35.842599 kubelet[2493]: I0129 11:48:35.842600 2493 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:48:35.847379 kubelet[2493]: I0129 11:48:35.847348 2493 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:48:35.847474 kubelet[2493]: I0129 11:48:35.847451 2493 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:48:35.847612 kubelet[2493]: I0129 11:48:35.847577 2493 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:48:35.847765 kubelet[2493]: I0129 11:48:35.847608 2493 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:48:35.847905 kubelet[2493]: I0129 11:48:35.847770 2493 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:48:35.847905 kubelet[2493]: I0129 11:48:35.847779 2493 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:48:35.847905 kubelet[2493]: I0129 11:48:35.847809 2493 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:48:35.848161 kubelet[2493]: I0129 11:48:35.847932 2493 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:48:35.848161 kubelet[2493]: I0129 11:48:35.847953 2493 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:48:35.848161 kubelet[2493]: I0129 11:48:35.847976 2493 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:48:35.848161 kubelet[2493]: I0129 11:48:35.847986 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:48:35.849083 kubelet[2493]: I0129 11:48:35.849064 2493 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:48:35.849418 kubelet[2493]: I0129 11:48:35.849401 2493 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:48:35.850415 kubelet[2493]: I0129 11:48:35.850379 2493 server.go:1269] "Started kubelet" Jan 29 11:48:35.852069 kubelet[2493]: I0129 11:48:35.852011 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 
11:48:35.852385 kubelet[2493]: I0129 11:48:35.852367 2493 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:48:35.852569 kubelet[2493]: I0129 11:48:35.852534 2493 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:48:35.861488 kubelet[2493]: I0129 11:48:35.861332 2493 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:48:35.861616 kubelet[2493]: I0129 11:48:35.861601 2493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:48:35.862610 kubelet[2493]: I0129 11:48:35.862262 2493 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:48:35.862610 kubelet[2493]: E0129 11:48:35.862287 2493 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:48:35.863177 kubelet[2493]: I0129 11:48:35.863162 2493 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:48:35.863264 kubelet[2493]: I0129 11:48:35.863245 2493 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:48:35.863461 kubelet[2493]: I0129 11:48:35.863377 2493 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:48:35.865747 kubelet[2493]: I0129 11:48:35.865721 2493 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:48:35.865747 kubelet[2493]: I0129 11:48:35.865737 2493 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:48:35.865881 kubelet[2493]: I0129 11:48:35.865812 2493 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:48:35.876342 kubelet[2493]: I0129 11:48:35.876182 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:48:35.877489 kubelet[2493]: I0129 11:48:35.877469 2493 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:48:35.877546 kubelet[2493]: I0129 11:48:35.877505 2493 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:48:35.877575 kubelet[2493]: I0129 11:48:35.877547 2493 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:48:35.877676 kubelet[2493]: E0129 11:48:35.877608 2493 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:48:35.908231 kubelet[2493]: I0129 11:48:35.908187 2493 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:48:35.908231 kubelet[2493]: I0129 11:48:35.908218 2493 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:48:35.908231 kubelet[2493]: I0129 11:48:35.908241 2493 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:48:35.908458 kubelet[2493]: I0129 11:48:35.908440 2493 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:48:35.908488 kubelet[2493]: I0129 11:48:35.908456 2493 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:48:35.908488 kubelet[2493]: I0129 11:48:35.908480 2493 policy_none.go:49] "None policy: Start" Jan 29 11:48:35.909139 kubelet[2493]: I0129 11:48:35.909111 2493 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:48:35.909139 kubelet[2493]: I0129 11:48:35.909134 2493 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:48:35.909355 kubelet[2493]: I0129 11:48:35.909339 2493 state_mem.go:75] "Updated machine memory state" Jan 29 11:48:35.914196 kubelet[2493]: I0129 11:48:35.913716 2493 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:48:35.914196 kubelet[2493]: I0129 11:48:35.913954 2493 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:48:35.914196 kubelet[2493]: I0129 11:48:35.913968 2493 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:48:35.914196 kubelet[2493]: I0129 11:48:35.914134 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:48:35.986027 kubelet[2493]: E0129 11:48:35.985964 2493 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:36.031758 kubelet[2493]: I0129 11:48:36.031600 2493 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:36.039671 kubelet[2493]: I0129 11:48:36.039630 2493 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:48:36.039789 kubelet[2493]: I0129 11:48:36.039723 2493 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:48:36.057388 sudo[2532]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:48:36.057725 sudo[2532]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:48:36.164796 kubelet[2493]: I0129 11:48:36.164749 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:36.164918 kubelet[2493]: I0129 11:48:36.164800 2493 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:36.164918 kubelet[2493]: I0129 11:48:36.164854 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d94e506e23b01ee6d048e2e0618a2018-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d94e506e23b01ee6d048e2e0618a2018\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:36.164918 kubelet[2493]: I0129 11:48:36.164907 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d94e506e23b01ee6d048e2e0618a2018-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d94e506e23b01ee6d048e2e0618a2018\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:36.164999 kubelet[2493]: I0129 11:48:36.164929 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d94e506e23b01ee6d048e2e0618a2018-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d94e506e23b01ee6d048e2e0618a2018\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:36.165036 kubelet[2493]: I0129 11:48:36.164996 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:36.165578 kubelet[2493]: I0129 11:48:36.165062 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:36.165578 kubelet[2493]: I0129 11:48:36.165097 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:36.165578 kubelet[2493]: I0129 11:48:36.165120 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:48:36.285090 kubelet[2493]: E0129 11:48:36.284951 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:36.286032 kubelet[2493]: E0129 11:48:36.285976 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:36.287169 kubelet[2493]: E0129 11:48:36.287153 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:36.631395 sudo[2532]: pam_unix(sudo:session): session closed for user root Jan 29 11:48:36.848998 kubelet[2493]: I0129 11:48:36.848958 2493 apiserver.go:52] "Watching apiserver" Jan 29 11:48:36.864168 kubelet[2493]: I0129 11:48:36.864116 2493 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:48:36.888881 kubelet[2493]: E0129 11:48:36.888766 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:36.889033 kubelet[2493]: E0129 11:48:36.889014 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:36.889240 kubelet[2493]: E0129 11:48:36.889207 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:36.913405 kubelet[2493]: I0129 11:48:36.913312 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.913292256 podStartE2EDuration="1.913292256s" podCreationTimestamp="2025-01-29 11:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:36.90261898 +0000 UTC m=+1.115844036" watchObservedRunningTime="2025-01-29 11:48:36.913292256 +0000 UTC m=+1.126517312" Jan 29 11:48:36.923585 kubelet[2493]: I0129 11:48:36.923516 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.923498125 podStartE2EDuration="1.923498125s" podCreationTimestamp="2025-01-29 11:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:36.913599591 +0000 UTC m=+1.126824657" watchObservedRunningTime="2025-01-29 11:48:36.923498125 +0000 UTC m=+1.136723181" Jan 29 11:48:37.890417 kubelet[2493]: E0129 11:48:37.889986 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:37.962031 kubelet[2493]: E0129 11:48:37.961976 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:38.179591 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 29 11:48:38.182125 sshd[1636]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:38.186573 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:51780.service: Deactivated successfully. Jan 29 11:48:38.188633 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:48:38.188816 systemd[1]: session-7.scope: Consumed 4.756s CPU time, 159.4M memory peak, 0B memory swap peak. Jan 29 11:48:38.189368 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. 
Jan 29 11:48:38.190456 systemd-logind[1448]: Removed session 7.
Jan 29 11:48:40.560184 kubelet[2493]: I0129 11:48:40.560136 2493 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 11:48:40.560747 kubelet[2493]: I0129 11:48:40.560573 2493 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 11:48:40.560787 containerd[1461]: time="2025-01-29T11:48:40.560432027Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 11:48:41.490818 kubelet[2493]: E0129 11:48:41.490782 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:41.797248 kubelet[2493]: I0129 11:48:41.797089 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrwt\" (UniqueName: \"kubernetes.io/projected/ee80462c-8e28-46b8-97f0-1ecf62154fad-kube-api-access-jwrwt\") pod \"kube-proxy-llnmc\" (UID: \"ee80462c-8e28-46b8-97f0-1ecf62154fad\") " pod="kube-system/kube-proxy-llnmc"
Jan 29 11:48:41.797248 kubelet[2493]: I0129 11:48:41.797133 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee80462c-8e28-46b8-97f0-1ecf62154fad-xtables-lock\") pod \"kube-proxy-llnmc\" (UID: \"ee80462c-8e28-46b8-97f0-1ecf62154fad\") " pod="kube-system/kube-proxy-llnmc"
Jan 29 11:48:41.797248 kubelet[2493]: I0129 11:48:41.797152 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee80462c-8e28-46b8-97f0-1ecf62154fad-lib-modules\") pod \"kube-proxy-llnmc\" (UID: \"ee80462c-8e28-46b8-97f0-1ecf62154fad\") " pod="kube-system/kube-proxy-llnmc"
Jan 29 11:48:41.797248 kubelet[2493]: I0129 11:48:41.797167 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee80462c-8e28-46b8-97f0-1ecf62154fad-kube-proxy\") pod \"kube-proxy-llnmc\" (UID: \"ee80462c-8e28-46b8-97f0-1ecf62154fad\") " pod="kube-system/kube-proxy-llnmc"
Jan 29 11:48:41.801024 systemd[1]: Created slice kubepods-besteffort-podee80462c_8e28_46b8_97f0_1ecf62154fad.slice - libcontainer container kubepods-besteffort-podee80462c_8e28_46b8_97f0_1ecf62154fad.slice.
Jan 29 11:48:41.814259 systemd[1]: Created slice kubepods-burstable-pod7fd9ccf1_806b_4052_9f45_56956988db8a.slice - libcontainer container kubepods-burstable-pod7fd9ccf1_806b_4052_9f45_56956988db8a.slice.
Jan 29 11:48:41.895478 kubelet[2493]: E0129 11:48:41.895444 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:41.898054 kubelet[2493]: I0129 11:48:41.898020 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-etc-cni-netd\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898099 kubelet[2493]: I0129 11:48:41.898067 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-hostproc\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898099 kubelet[2493]: I0129 11:48:41.898093 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-bpf-maps\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898151 kubelet[2493]: I0129 11:48:41.898108 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-lib-modules\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898151 kubelet[2493]: I0129 11:48:41.898126 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-host-proc-sys-net\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898151 kubelet[2493]: I0129 11:48:41.898142 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-config-path\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898228 kubelet[2493]: I0129 11:48:41.898156 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-cgroup\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898228 kubelet[2493]: I0129 11:48:41.898173 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cni-path\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898228 kubelet[2493]: I0129 11:48:41.898187 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fd9ccf1-806b-4052-9f45-56956988db8a-clustermesh-secrets\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898228 kubelet[2493]: I0129 11:48:41.898201 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-host-proc-sys-kernel\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898228 kubelet[2493]: I0129 11:48:41.898218 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg74\" (UniqueName: \"kubernetes.io/projected/7fd9ccf1-806b-4052-9f45-56956988db8a-kube-api-access-slg74\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898351 kubelet[2493]: I0129 11:48:41.898234 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fd9ccf1-806b-4052-9f45-56956988db8a-hubble-tls\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898351 kubelet[2493]: I0129 11:48:41.898293 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-run\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:41.898351 kubelet[2493]: I0129 11:48:41.898312 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-xtables-lock\") pod \"cilium-h65zp\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " pod="kube-system/cilium-h65zp"
Jan 29 11:48:42.078399 systemd[1]: Created slice kubepods-besteffort-pod12cc4805_c777_4e2b_bd44_8f1f312e752b.slice - libcontainer container kubepods-besteffort-pod12cc4805_c777_4e2b_bd44_8f1f312e752b.slice.
Jan 29 11:48:42.099374 kubelet[2493]: I0129 11:48:42.099343 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7www\" (UniqueName: \"kubernetes.io/projected/12cc4805-c777-4e2b-bd44-8f1f312e752b-kube-api-access-t7www\") pod \"cilium-operator-5d85765b45-cvrcr\" (UID: \"12cc4805-c777-4e2b-bd44-8f1f312e752b\") " pod="kube-system/cilium-operator-5d85765b45-cvrcr"
Jan 29 11:48:42.099473 kubelet[2493]: I0129 11:48:42.099381 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12cc4805-c777-4e2b-bd44-8f1f312e752b-cilium-config-path\") pod \"cilium-operator-5d85765b45-cvrcr\" (UID: \"12cc4805-c777-4e2b-bd44-8f1f312e752b\") " pod="kube-system/cilium-operator-5d85765b45-cvrcr"
Jan 29 11:48:42.381749 kubelet[2493]: E0129 11:48:42.381603 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:42.382358 containerd[1461]: time="2025-01-29T11:48:42.382323623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cvrcr,Uid:12cc4805-c777-4e2b-bd44-8f1f312e752b,Namespace:kube-system,Attempt:0,}"
Jan 29 11:48:42.402517 update_engine[1451]: I20250129 11:48:42.402447 1451 update_attempter.cc:509] Updating boot flags...
Jan 29 11:48:42.410484 kubelet[2493]: E0129 11:48:42.410447 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:42.410962 containerd[1461]: time="2025-01-29T11:48:42.410929299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-llnmc,Uid:ee80462c-8e28-46b8-97f0-1ecf62154fad,Namespace:kube-system,Attempt:0,}"
Jan 29 11:48:42.417192 kubelet[2493]: E0129 11:48:42.417174 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:42.417555 containerd[1461]: time="2025-01-29T11:48:42.417522566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h65zp,Uid:7fd9ccf1-806b-4052-9f45-56956988db8a,Namespace:kube-system,Attempt:0,}"
Jan 29 11:48:42.461889 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2583)
Jan 29 11:48:42.496877 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2584)
Jan 29 11:48:42.585875 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2584)
Jan 29 11:48:42.762214 containerd[1461]: time="2025-01-29T11:48:42.758354908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:48:42.762214 containerd[1461]: time="2025-01-29T11:48:42.758424830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:48:42.762214 containerd[1461]: time="2025-01-29T11:48:42.758449216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:48:42.762214 containerd[1461]: time="2025-01-29T11:48:42.758588601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:48:42.765562 containerd[1461]: time="2025-01-29T11:48:42.765196886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:48:42.765562 containerd[1461]: time="2025-01-29T11:48:42.765257551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:48:42.765562 containerd[1461]: time="2025-01-29T11:48:42.765284111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:48:42.765562 containerd[1461]: time="2025-01-29T11:48:42.765379272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:48:42.833289 systemd[1]: Started cri-containerd-0be06a5f72335b96a7289832a48a00f0439ec8740324b4ee66398de5ee6583e2.scope - libcontainer container 0be06a5f72335b96a7289832a48a00f0439ec8740324b4ee66398de5ee6583e2.
Jan 29 11:48:42.837246 systemd[1]: Started cri-containerd-8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4.scope - libcontainer container 8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4.
Jan 29 11:48:42.843872 containerd[1461]: time="2025-01-29T11:48:42.843520772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:48:42.843872 containerd[1461]: time="2025-01-29T11:48:42.843637252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:48:42.843872 containerd[1461]: time="2025-01-29T11:48:42.843654174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:48:42.843872 containerd[1461]: time="2025-01-29T11:48:42.843743051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:48:42.862994 systemd[1]: Started cri-containerd-ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73.scope - libcontainer container ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73.
Jan 29 11:48:42.864334 containerd[1461]: time="2025-01-29T11:48:42.864300055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-llnmc,Uid:ee80462c-8e28-46b8-97f0-1ecf62154fad,Namespace:kube-system,Attempt:0,} returns sandbox id \"0be06a5f72335b96a7289832a48a00f0439ec8740324b4ee66398de5ee6583e2\""
Jan 29 11:48:42.865614 kubelet[2493]: E0129 11:48:42.865116 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:42.868898 containerd[1461]: time="2025-01-29T11:48:42.868865114Z" level=info msg="CreateContainer within sandbox \"0be06a5f72335b96a7289832a48a00f0439ec8740324b4ee66398de5ee6583e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:48:42.885419 containerd[1461]: time="2025-01-29T11:48:42.885371232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cvrcr,Uid:12cc4805-c777-4e2b-bd44-8f1f312e752b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4\""
Jan 29 11:48:42.886548 kubelet[2493]: E0129 11:48:42.886513 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:42.890061 containerd[1461]: time="2025-01-29T11:48:42.890018216Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 11:48:42.894613 containerd[1461]: time="2025-01-29T11:48:42.894560432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h65zp,Uid:7fd9ccf1-806b-4052-9f45-56956988db8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\""
Jan 29 11:48:42.895524 kubelet[2493]: E0129 11:48:42.895492 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:42.900378 containerd[1461]: time="2025-01-29T11:48:42.900335822Z" level=info msg="CreateContainer within sandbox \"0be06a5f72335b96a7289832a48a00f0439ec8740324b4ee66398de5ee6583e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6106fdc6c49e40b34c6baca9faef0c251abc9434eff51f46f9bf2e6358fb27b2\""
Jan 29 11:48:42.900966 containerd[1461]: time="2025-01-29T11:48:42.900906893Z" level=info msg="StartContainer for \"6106fdc6c49e40b34c6baca9faef0c251abc9434eff51f46f9bf2e6358fb27b2\""
Jan 29 11:48:42.932034 systemd[1]: Started cri-containerd-6106fdc6c49e40b34c6baca9faef0c251abc9434eff51f46f9bf2e6358fb27b2.scope - libcontainer container 6106fdc6c49e40b34c6baca9faef0c251abc9434eff51f46f9bf2e6358fb27b2.
Jan 29 11:48:42.965491 containerd[1461]: time="2025-01-29T11:48:42.965437755Z" level=info msg="StartContainer for \"6106fdc6c49e40b34c6baca9faef0c251abc9434eff51f46f9bf2e6358fb27b2\" returns successfully"
Jan 29 11:48:43.904658 kubelet[2493]: E0129 11:48:43.904621 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:43.984388 kubelet[2493]: I0129 11:48:43.984296 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-llnmc" podStartSLOduration=2.984276305 podStartE2EDuration="2.984276305s" podCreationTimestamp="2025-01-29 11:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:43.984147933 +0000 UTC m=+8.197372989" watchObservedRunningTime="2025-01-29 11:48:43.984276305 +0000 UTC m=+8.197501362"
Jan 29 11:48:44.803740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306971967.mount: Deactivated successfully.
Jan 29 11:48:44.906514 kubelet[2493]: E0129 11:48:44.906471 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:45.148764 containerd[1461]: time="2025-01-29T11:48:45.148605428Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:48:45.149726 containerd[1461]: time="2025-01-29T11:48:45.149674138Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 29 11:48:45.150997 containerd[1461]: time="2025-01-29T11:48:45.150945259Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:48:45.152680 containerd[1461]: time="2025-01-29T11:48:45.152637026Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.262570227s"
Jan 29 11:48:45.152736 containerd[1461]: time="2025-01-29T11:48:45.152678033Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 29 11:48:45.153866 containerd[1461]: time="2025-01-29T11:48:45.153822035Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 11:48:45.155348 containerd[1461]: time="2025-01-29T11:48:45.155317249Z" level=info msg="CreateContainer within sandbox \"8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 11:48:45.168859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525880027.mount: Deactivated successfully.
Jan 29 11:48:45.186467 containerd[1461]: time="2025-01-29T11:48:45.186405497Z" level=info msg="CreateContainer within sandbox \"8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\""
Jan 29 11:48:45.187675 containerd[1461]: time="2025-01-29T11:48:45.187035117Z" level=info msg="StartContainer for \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\""
Jan 29 11:48:45.219058 systemd[1]: Started cri-containerd-de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76.scope - libcontainer container de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76.
Jan 29 11:48:45.319877 containerd[1461]: time="2025-01-29T11:48:45.319805692Z" level=info msg="StartContainer for \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\" returns successfully"
Jan 29 11:48:45.910581 kubelet[2493]: E0129 11:48:45.910482 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:45.911463 kubelet[2493]: E0129 11:48:45.911395 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:45.971449 kubelet[2493]: I0129 11:48:45.971364 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-cvrcr" podStartSLOduration=2.707043733 podStartE2EDuration="4.97134353s" podCreationTimestamp="2025-01-29 11:48:41 +0000 UTC" firstStartedPulling="2025-01-29 11:48:42.889296801 +0000 UTC m=+7.102521857" lastFinishedPulling="2025-01-29 11:48:45.153596578 +0000 UTC m=+9.366821654" observedRunningTime="2025-01-29 11:48:45.945277091 +0000 UTC m=+10.158502147" watchObservedRunningTime="2025-01-29 11:48:45.97134353 +0000 UTC m=+10.184568586"
Jan 29 11:48:46.912601 kubelet[2493]: E0129 11:48:46.912564 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:46.913263 kubelet[2493]: E0129 11:48:46.913206 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:47.914580 kubelet[2493]: E0129 11:48:47.914537 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:47.965509 kubelet[2493]: E0129 11:48:47.965471 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:50.834093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748550984.mount: Deactivated successfully.
Jan 29 11:48:54.997804 containerd[1461]: time="2025-01-29T11:48:54.997738651Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:48:54.998566 containerd[1461]: time="2025-01-29T11:48:54.998478966Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 29 11:48:54.999672 containerd[1461]: time="2025-01-29T11:48:54.999633821Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:48:55.001294 containerd[1461]: time="2025-01-29T11:48:55.001243612Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.84737007s"
Jan 29 11:48:55.001294 containerd[1461]: time="2025-01-29T11:48:55.001292043Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 29 11:48:55.025151 containerd[1461]: time="2025-01-29T11:48:55.025090184Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:48:55.045725 containerd[1461]: time="2025-01-29T11:48:55.045657052Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\""
Jan 29 11:48:55.049001 containerd[1461]: time="2025-01-29T11:48:55.048958727Z" level=info msg="StartContainer for \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\""
Jan 29 11:48:55.084983 systemd[1]: Started cri-containerd-8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a.scope - libcontainer container 8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a.
Jan 29 11:48:55.117078 containerd[1461]: time="2025-01-29T11:48:55.117024344Z" level=info msg="StartContainer for \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\" returns successfully"
Jan 29 11:48:55.129197 systemd[1]: cri-containerd-8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a.scope: Deactivated successfully.
Jan 29 11:48:55.623906 containerd[1461]: time="2025-01-29T11:48:55.621444840Z" level=info msg="shim disconnected" id=8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a namespace=k8s.io
Jan 29 11:48:55.623906 containerd[1461]: time="2025-01-29T11:48:55.623900353Z" level=warning msg="cleaning up after shim disconnected" id=8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a namespace=k8s.io
Jan 29 11:48:55.623906 containerd[1461]: time="2025-01-29T11:48:55.623912405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:48:55.928980 kubelet[2493]: E0129 11:48:55.928946 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:55.931162 containerd[1461]: time="2025-01-29T11:48:55.931118215Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:48:55.947722 containerd[1461]: time="2025-01-29T11:48:55.947472792Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\""
Jan 29 11:48:55.948996 containerd[1461]: time="2025-01-29T11:48:55.948953229Z" level=info msg="StartContainer for \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\""
Jan 29 11:48:55.977043 systemd[1]: Started cri-containerd-5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1.scope - libcontainer container 5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1.
Jan 29 11:48:56.007308 containerd[1461]: time="2025-01-29T11:48:56.007263899Z" level=info msg="StartContainer for \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\" returns successfully"
Jan 29 11:48:56.018061 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:48:56.018386 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:48:56.018469 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:48:56.025370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:48:56.025668 systemd[1]: cri-containerd-5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1.scope: Deactivated successfully.
Jan 29 11:48:56.041552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a-rootfs.mount: Deactivated successfully.
Jan 29 11:48:56.043810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1-rootfs.mount: Deactivated successfully.
Jan 29 11:48:56.051475 containerd[1461]: time="2025-01-29T11:48:56.051396269Z" level=info msg="shim disconnected" id=5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1 namespace=k8s.io
Jan 29 11:48:56.051475 containerd[1461]: time="2025-01-29T11:48:56.051458586Z" level=warning msg="cleaning up after shim disconnected" id=5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1 namespace=k8s.io
Jan 29 11:48:56.051475 containerd[1461]: time="2025-01-29T11:48:56.051466963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:48:56.058035 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:48:56.932692 kubelet[2493]: E0129 11:48:56.932646 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:56.937047 containerd[1461]: time="2025-01-29T11:48:56.936991196Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:48:56.956565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419050341.mount: Deactivated successfully.
Jan 29 11:48:56.960775 containerd[1461]: time="2025-01-29T11:48:56.960718292Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\""
Jan 29 11:48:56.961437 containerd[1461]: time="2025-01-29T11:48:56.961298424Z" level=info msg="StartContainer for \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\""
Jan 29 11:48:56.990154 systemd[1]: Started cri-containerd-9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091.scope - libcontainer container 9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091.
Jan 29 11:48:57.022489 containerd[1461]: time="2025-01-29T11:48:57.022404900Z" level=info msg="StartContainer for \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\" returns successfully"
Jan 29 11:48:57.023262 systemd[1]: cri-containerd-9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091.scope: Deactivated successfully.
Jan 29 11:48:57.046265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091-rootfs.mount: Deactivated successfully.
Jan 29 11:48:57.052062 containerd[1461]: time="2025-01-29T11:48:57.051998034Z" level=info msg="shim disconnected" id=9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091 namespace=k8s.io
Jan 29 11:48:57.052190 containerd[1461]: time="2025-01-29T11:48:57.052064830Z" level=warning msg="cleaning up after shim disconnected" id=9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091 namespace=k8s.io
Jan 29 11:48:57.052190 containerd[1461]: time="2025-01-29T11:48:57.052080400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:48:57.936104 kubelet[2493]: E0129 11:48:57.936033 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:57.937664 containerd[1461]: time="2025-01-29T11:48:57.937622834Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:48:57.953647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086057448.mount: Deactivated successfully.
Jan 29 11:48:57.955074 containerd[1461]: time="2025-01-29T11:48:57.954808151Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\""
Jan 29 11:48:57.955479 containerd[1461]: time="2025-01-29T11:48:57.955457253Z" level=info msg="StartContainer for \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\""
Jan 29 11:48:57.983985 systemd[1]: Started cri-containerd-ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14.scope - libcontainer container ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14.
Jan 29 11:48:58.007125 systemd[1]: cri-containerd-ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14.scope: Deactivated successfully.
Jan 29 11:48:58.008599 containerd[1461]: time="2025-01-29T11:48:58.008562423Z" level=info msg="StartContainer for \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\" returns successfully"
Jan 29 11:48:58.031828 containerd[1461]: time="2025-01-29T11:48:58.031761837Z" level=info msg="shim disconnected" id=ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14 namespace=k8s.io
Jan 29 11:48:58.031828 containerd[1461]: time="2025-01-29T11:48:58.031823232Z" level=warning msg="cleaning up after shim disconnected" id=ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14 namespace=k8s.io
Jan 29 11:48:58.032305 containerd[1461]: time="2025-01-29T11:48:58.031849512Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:48:58.040796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14-rootfs.mount: Deactivated successfully.
Jan 29 11:48:58.946096 kubelet[2493]: E0129 11:48:58.946030 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:58.947750 containerd[1461]: time="2025-01-29T11:48:58.947701539Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:48:58.966750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987628001.mount: Deactivated successfully.
Jan 29 11:48:58.967967 containerd[1461]: time="2025-01-29T11:48:58.967922870Z" level=info msg="CreateContainer within sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\""
Jan 29 11:48:58.968611 containerd[1461]: time="2025-01-29T11:48:58.968558606Z" level=info msg="StartContainer for \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\""
Jan 29 11:48:58.997972 systemd[1]: Started cri-containerd-9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c.scope - libcontainer container 9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c.
Jan 29 11:48:59.029803 containerd[1461]: time="2025-01-29T11:48:59.029743897Z" level=info msg="StartContainer for \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\" returns successfully"
Jan 29 11:48:59.054354 systemd[1]: run-containerd-runc-k8s.io-9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c-runc.7Z6tqx.mount: Deactivated successfully.
Jan 29 11:48:59.138230 kubelet[2493]: I0129 11:48:59.138173 2493 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 29 11:48:59.176877 systemd[1]: Created slice kubepods-burstable-poda24be675_4908_4650_a410_10e8909b6d34.slice - libcontainer container kubepods-burstable-poda24be675_4908_4650_a410_10e8909b6d34.slice.
Jan 29 11:48:59.185194 systemd[1]: Created slice kubepods-burstable-pod9e584d43_93d9_4b1b_b899_66d38266fe33.slice - libcontainer container kubepods-burstable-pod9e584d43_93d9_4b1b_b899_66d38266fe33.slice.
Jan 29 11:48:59.203306 kubelet[2493]: I0129 11:48:59.203205 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj969\" (UniqueName: \"kubernetes.io/projected/9e584d43-93d9-4b1b-b899-66d38266fe33-kube-api-access-fj969\") pod \"coredns-6f6b679f8f-rrkxl\" (UID: \"9e584d43-93d9-4b1b-b899-66d38266fe33\") " pod="kube-system/coredns-6f6b679f8f-rrkxl"
Jan 29 11:48:59.203306 kubelet[2493]: I0129 11:48:59.203247 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwp8\" (UniqueName: \"kubernetes.io/projected/a24be675-4908-4650-a410-10e8909b6d34-kube-api-access-bcwp8\") pod \"coredns-6f6b679f8f-nf7bv\" (UID: \"a24be675-4908-4650-a410-10e8909b6d34\") " pod="kube-system/coredns-6f6b679f8f-nf7bv"
Jan 29 11:48:59.203306 kubelet[2493]: I0129 11:48:59.203263 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e584d43-93d9-4b1b-b899-66d38266fe33-config-volume\") pod \"coredns-6f6b679f8f-rrkxl\" (UID: \"9e584d43-93d9-4b1b-b899-66d38266fe33\") " pod="kube-system/coredns-6f6b679f8f-rrkxl"
Jan 29 11:48:59.203306 kubelet[2493]: I0129 11:48:59.203278 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24be675-4908-4650-a410-10e8909b6d34-config-volume\") pod \"coredns-6f6b679f8f-nf7bv\" (UID: \"a24be675-4908-4650-a410-10e8909b6d34\") " pod="kube-system/coredns-6f6b679f8f-nf7bv"
Jan 29 11:48:59.481465 kubelet[2493]: E0129 11:48:59.481308 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:59.485228 containerd[1461]: time="2025-01-29T11:48:59.485122206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nf7bv,Uid:a24be675-4908-4650-a410-10e8909b6d34,Namespace:kube-system,Attempt:0,}"
Jan 29 11:48:59.488530 kubelet[2493]: E0129 11:48:59.488304 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:59.489401 containerd[1461]: time="2025-01-29T11:48:59.489373512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rrkxl,Uid:9e584d43-93d9-4b1b-b899-66d38266fe33,Namespace:kube-system,Attempt:0,}"
Jan 29 11:48:59.954128 kubelet[2493]: E0129 11:48:59.954084 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:59.983605 kubelet[2493]: I0129 11:48:59.983408 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h65zp" podStartSLOduration=6.870668671 podStartE2EDuration="18.98338876s" podCreationTimestamp="2025-01-29 11:48:41 +0000 UTC" firstStartedPulling="2025-01-29 11:48:42.897160292 +0000 UTC m=+7.110385348" lastFinishedPulling="2025-01-29 11:48:55.009880381 +0000 UTC m=+19.223105437" observedRunningTime="2025-01-29 11:48:59.98277188 +0000 UTC m=+24.195996956" watchObservedRunningTime="2025-01-29 11:48:59.98338876 +0000 UTC m=+24.196613816"
Jan 29 11:49:00.952426 kubelet[2493]: E0129 11:49:00.952378 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:01.404324 systemd-networkd[1386]: cilium_host: Link UP
Jan 29 11:49:01.404544 systemd-networkd[1386]: cilium_net: Link UP
Jan 29 11:49:01.404549 systemd-networkd[1386]: cilium_net: Gained carrier
Jan 29 11:49:01.404772 systemd-networkd[1386]: cilium_host: Gained carrier
Jan 29 11:49:01.405436 systemd-networkd[1386]: cilium_host: Gained IPv6LL
Jan 29 11:49:01.521114 systemd-networkd[1386]: cilium_vxlan: Link UP
Jan 29 11:49:01.521124 systemd-networkd[1386]: cilium_vxlan: Gained carrier
Jan 29 11:49:01.627072 systemd-networkd[1386]: cilium_net: Gained IPv6LL
Jan 29 11:49:01.753894 kernel: NET: Registered PF_ALG protocol family
Jan 29 11:49:01.954279 kubelet[2493]: E0129 11:49:01.954234 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:02.436914 systemd-networkd[1386]: lxc_health: Link UP
Jan 29 11:49:02.443520 systemd-networkd[1386]: lxc_health: Gained carrier
Jan 29 11:49:02.621373 systemd-networkd[1386]: lxc5cdbd0be9b51: Link UP
Jan 29 11:49:02.632340 kernel: eth0: renamed from tmpb085b
Jan 29 11:49:02.644115 systemd-networkd[1386]: lxc7a56e99f70b7: Link UP
Jan 29 11:49:02.645314 systemd-networkd[1386]: lxc5cdbd0be9b51: Gained carrier
Jan 29 11:49:02.648005 kernel: eth0: renamed from tmp8dc60
Jan 29 11:49:02.653120 systemd-networkd[1386]: lxc7a56e99f70b7: Gained carrier
Jan 29 11:49:03.525948 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL
Jan 29 11:49:03.715117 systemd-networkd[1386]: lxc5cdbd0be9b51: Gained IPv6LL
Jan 29 11:49:04.099051 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Jan 29 11:49:04.418753 kubelet[2493]: E0129 11:49:04.418703 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:04.420351 systemd-networkd[1386]: lxc7a56e99f70b7: Gained IPv6LL
Jan 29 11:49:06.227063 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:56086.service - OpenSSH per-connection server daemon (10.0.0.1:56086).
Jan 29 11:49:06.282884 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 56086 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:49:06.282370 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:49:06.288221 systemd-logind[1448]: New session 8 of user core.
Jan 29 11:49:06.292226 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:49:06.330053 containerd[1461]: time="2025-01-29T11:49:06.329740325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:49:06.330053 containerd[1461]: time="2025-01-29T11:49:06.329822549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:49:06.330053 containerd[1461]: time="2025-01-29T11:49:06.329863106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:49:06.330558 containerd[1461]: time="2025-01-29T11:49:06.329988281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:49:06.359120 systemd[1]: Started cri-containerd-b085b0d9e310bff2efab2e831ed3c79e8387dea4214f4ec004cc36edca574493.scope - libcontainer container b085b0d9e310bff2efab2e831ed3c79e8387dea4214f4ec004cc36edca574493.
Jan 29 11:49:06.371282 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:49:06.385384 containerd[1461]: time="2025-01-29T11:49:06.385308742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:49:06.385384 containerd[1461]: time="2025-01-29T11:49:06.385352765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:49:06.385384 containerd[1461]: time="2025-01-29T11:49:06.385363435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:49:06.385496 containerd[1461]: time="2025-01-29T11:49:06.385428176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:49:06.399889 containerd[1461]: time="2025-01-29T11:49:06.398567411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nf7bv,Uid:a24be675-4908-4650-a410-10e8909b6d34,Namespace:kube-system,Attempt:0,} returns sandbox id \"b085b0d9e310bff2efab2e831ed3c79e8387dea4214f4ec004cc36edca574493\""
Jan 29 11:49:06.399974 kubelet[2493]: E0129 11:49:06.399686 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:06.402175 containerd[1461]: time="2025-01-29T11:49:06.402119545Z" level=info msg="CreateContainer within sandbox \"b085b0d9e310bff2efab2e831ed3c79e8387dea4214f4ec004cc36edca574493\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:49:06.402975 systemd[1]: run-containerd-runc-k8s.io-8dc601f9a4a1bb8214c4b2684e623b1077c7c3e17058cd09f34776811b319247-runc.3qPuFs.mount: Deactivated successfully.
Jan 29 11:49:06.413070 systemd[1]: Started cri-containerd-8dc601f9a4a1bb8214c4b2684e623b1077c7c3e17058cd09f34776811b319247.scope - libcontainer container 8dc601f9a4a1bb8214c4b2684e623b1077c7c3e17058cd09f34776811b319247.
Jan 29 11:49:06.427261 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:49:06.454819 containerd[1461]: time="2025-01-29T11:49:06.454781411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rrkxl,Uid:9e584d43-93d9-4b1b-b899-66d38266fe33,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dc601f9a4a1bb8214c4b2684e623b1077c7c3e17058cd09f34776811b319247\""
Jan 29 11:49:06.455562 kubelet[2493]: E0129 11:49:06.455529 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:06.457293 containerd[1461]: time="2025-01-29T11:49:06.457242835Z" level=info msg="CreateContainer within sandbox \"8dc601f9a4a1bb8214c4b2684e623b1077c7c3e17058cd09f34776811b319247\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:49:06.496404 containerd[1461]: time="2025-01-29T11:49:06.496287869Z" level=info msg="CreateContainer within sandbox \"8dc601f9a4a1bb8214c4b2684e623b1077c7c3e17058cd09f34776811b319247\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc00e1eefabff73cd10b6be093a1f4799c16be71bc3cb6fad407d18a4bfe6a5a\""
Jan 29 11:49:06.496972 containerd[1461]: time="2025-01-29T11:49:06.496902936Z" level=info msg="StartContainer for \"bc00e1eefabff73cd10b6be093a1f4799c16be71bc3cb6fad407d18a4bfe6a5a\""
Jan 29 11:49:06.501271 containerd[1461]: time="2025-01-29T11:49:06.501231989Z" level=info msg="CreateContainer within sandbox \"b085b0d9e310bff2efab2e831ed3c79e8387dea4214f4ec004cc36edca574493\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac1cb9f80ff4f5d4a3c5e6281b1d7f4abebbfd7cd3796a1f896a734e3e36eecd\""
Jan 29 11:49:06.502131 containerd[1461]: time="2025-01-29T11:49:06.502022895Z" level=info msg="StartContainer for \"ac1cb9f80ff4f5d4a3c5e6281b1d7f4abebbfd7cd3796a1f896a734e3e36eecd\""
Jan 29 11:49:06.530992 systemd[1]: Started cri-containerd-bc00e1eefabff73cd10b6be093a1f4799c16be71bc3cb6fad407d18a4bfe6a5a.scope - libcontainer container bc00e1eefabff73cd10b6be093a1f4799c16be71bc3cb6fad407d18a4bfe6a5a.
Jan 29 11:49:06.534900 systemd[1]: Started cri-containerd-ac1cb9f80ff4f5d4a3c5e6281b1d7f4abebbfd7cd3796a1f896a734e3e36eecd.scope - libcontainer container ac1cb9f80ff4f5d4a3c5e6281b1d7f4abebbfd7cd3796a1f896a734e3e36eecd.
Jan 29 11:49:06.575037 containerd[1461]: time="2025-01-29T11:49:06.574937540Z" level=info msg="StartContainer for \"ac1cb9f80ff4f5d4a3c5e6281b1d7f4abebbfd7cd3796a1f896a734e3e36eecd\" returns successfully"
Jan 29 11:49:06.582788 containerd[1461]: time="2025-01-29T11:49:06.582668133Z" level=info msg="StartContainer for \"bc00e1eefabff73cd10b6be093a1f4799c16be71bc3cb6fad407d18a4bfe6a5a\" returns successfully"
Jan 29 11:49:06.664185 sshd[3723]: pam_unix(sshd:session): session closed for user core
Jan 29 11:49:06.668656 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:56086.service: Deactivated successfully.
Jan 29 11:49:06.671451 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:49:06.673524 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:49:06.674714 systemd-logind[1448]: Removed session 8.
Jan 29 11:49:06.968508 kubelet[2493]: E0129 11:49:06.968317 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:06.971921 kubelet[2493]: E0129 11:49:06.971881 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:06.979794 kubelet[2493]: I0129 11:49:06.979733 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rrkxl" podStartSLOduration=25.979713246 podStartE2EDuration="25.979713246s" podCreationTimestamp="2025-01-29 11:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:49:06.979426928 +0000 UTC m=+31.192651984" watchObservedRunningTime="2025-01-29 11:49:06.979713246 +0000 UTC m=+31.192938302"
Jan 29 11:49:06.989161 kubelet[2493]: I0129 11:49:06.989066 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nf7bv" podStartSLOduration=25.989049496 podStartE2EDuration="25.989049496s" podCreationTimestamp="2025-01-29 11:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:49:06.988711692 +0000 UTC m=+31.201936748" watchObservedRunningTime="2025-01-29 11:49:06.989049496 +0000 UTC m=+31.202274552"
Jan 29 11:49:07.973544 kubelet[2493]: E0129 11:49:07.973504 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:07.974077 kubelet[2493]: E0129 11:49:07.973591 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:08.975212 kubelet[2493]: E0129 11:49:08.975159 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:08.975212 kubelet[2493]: E0129 11:49:08.975220 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:10.511463 kubelet[2493]: I0129 11:49:10.511394 2493 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:49:10.512072 kubelet[2493]: E0129 11:49:10.512052 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:10.979244 kubelet[2493]: E0129 11:49:10.979187 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:11.678272 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:52830.service - OpenSSH per-connection server daemon (10.0.0.1:52830).
Jan 29 11:49:11.719957 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 52830 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:11.721959 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:11.727025 systemd-logind[1448]: New session 9 of user core. Jan 29 11:49:11.732168 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:49:11.872217 sshd[3901]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:11.875322 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:52830.service: Deactivated successfully. Jan 29 11:49:11.877375 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:49:11.879297 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:49:11.880648 systemd-logind[1448]: Removed session 9. Jan 29 11:49:16.884663 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:52832.service - OpenSSH per-connection server daemon (10.0.0.1:52832). Jan 29 11:49:16.923827 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 52832 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:16.925737 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:16.930385 systemd-logind[1448]: New session 10 of user core. Jan 29 11:49:16.942215 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:49:17.075524 sshd[3920]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:17.080857 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:52832.service: Deactivated successfully. Jan 29 11:49:17.083704 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:49:17.084576 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:49:17.085746 systemd-logind[1448]: Removed session 10. Jan 29 11:49:22.086921 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:44798.service - OpenSSH per-connection server daemon (10.0.0.1:44798). Jan 29 11:49:22.119193 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 44798 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:22.120817 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:22.124903 systemd-logind[1448]: New session 11 of user core. Jan 29 11:49:22.132990 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:49:22.238005 sshd[3935]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:22.249642 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:44798.service: Deactivated successfully. Jan 29 11:49:22.251472 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:49:22.253438 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:49:22.254923 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:44810.service - OpenSSH per-connection server daemon (10.0.0.1:44810). Jan 29 11:49:22.256007 systemd-logind[1448]: Removed session 11. Jan 29 11:49:22.289510 sshd[3950]: Accepted publickey for core from 10.0.0.1 port 44810 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:22.291436 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:22.296252 systemd-logind[1448]: New session 12 of user core. Jan 29 11:49:22.307001 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 11:49:22.509966 sshd[3950]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:22.522068 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:44810.service: Deactivated successfully. Jan 29 11:49:22.524608 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:49:22.526916 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:49:22.543225 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:44824.service - OpenSSH per-connection server daemon (10.0.0.1:44824). Jan 29 11:49:22.544412 systemd-logind[1448]: Removed session 12. Jan 29 11:49:22.572581 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 44824 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:22.574449 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:22.579398 systemd-logind[1448]: New session 13 of user core. Jan 29 11:49:22.593167 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:49:22.709454 sshd[3963]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:22.713902 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:44824.service: Deactivated successfully. Jan 29 11:49:22.716103 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:49:22.716772 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:49:22.717924 systemd-logind[1448]: Removed session 13. Jan 29 11:49:27.726069 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:44840.service - OpenSSH per-connection server daemon (10.0.0.1:44840). Jan 29 11:49:27.758124 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 44840 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:27.759592 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:27.763654 systemd-logind[1448]: New session 14 of user core. Jan 29 11:49:27.773972 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:49:27.924786 sshd[3977]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:27.928471 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:44840.service: Deactivated successfully. Jan 29 11:49:27.930544 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:49:27.931162 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:49:27.932025 systemd-logind[1448]: Removed session 14. Jan 29 11:49:32.937807 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:36894.service - OpenSSH per-connection server daemon (10.0.0.1:36894). Jan 29 11:49:32.973138 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 36894 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:32.975124 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:32.979289 systemd-logind[1448]: New session 15 of user core. Jan 29 11:49:32.989097 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:49:33.102677 sshd[3992]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:33.106473 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:36894.service: Deactivated successfully. Jan 29 11:49:33.108471 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:49:33.109068 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:49:33.109947 systemd-logind[1448]: Removed session 15. 
Jan 29 11:49:38.117816 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:36910.service - OpenSSH per-connection server daemon (10.0.0.1:36910). Jan 29 11:49:38.156323 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 36910 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:38.158624 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:38.163690 systemd-logind[1448]: New session 16 of user core. Jan 29 11:49:38.172110 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:49:38.296675 sshd[4008]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:38.308803 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:36910.service: Deactivated successfully. Jan 29 11:49:38.311992 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:49:38.314390 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:49:38.324337 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:36926.service - OpenSSH per-connection server daemon (10.0.0.1:36926). Jan 29 11:49:38.325699 systemd-logind[1448]: Removed session 16. Jan 29 11:49:38.356801 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 36926 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:38.359071 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:38.364117 systemd-logind[1448]: New session 17 of user core. Jan 29 11:49:38.370967 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:49:38.630333 sshd[4023]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:38.644639 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:36926.service: Deactivated successfully. Jan 29 11:49:38.647104 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:49:38.649410 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:49:38.651629 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:36940.service - OpenSSH per-connection server daemon (10.0.0.1:36940). Jan 29 11:49:38.653264 systemd-logind[1448]: Removed session 17. Jan 29 11:49:38.700787 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 36940 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:38.703041 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:38.711662 systemd-logind[1448]: New session 18 of user core. Jan 29 11:49:38.721255 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:49:40.364179 sshd[4036]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:40.378161 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:36940.service: Deactivated successfully. Jan 29 11:49:40.381894 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:49:40.385316 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:49:40.394371 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:36946.service - OpenSSH per-connection server daemon (10.0.0.1:36946). Jan 29 11:49:40.396366 systemd-logind[1448]: Removed session 18. Jan 29 11:49:40.426677 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 36946 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:40.428736 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:40.433461 systemd-logind[1448]: New session 19 of user core. 
Jan 29 11:49:40.442976 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:49:40.835040 sshd[4056]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:40.845314 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:36946.service: Deactivated successfully. Jan 29 11:49:40.847134 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:49:40.849096 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:49:40.861347 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:36952.service - OpenSSH per-connection server daemon (10.0.0.1:36952). Jan 29 11:49:40.862762 systemd-logind[1448]: Removed session 19. Jan 29 11:49:40.891291 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 36952 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:40.893275 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:40.898083 systemd-logind[1448]: New session 20 of user core. Jan 29 11:49:40.908098 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:49:41.027482 sshd[4069]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:41.032151 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:36952.service: Deactivated successfully. Jan 29 11:49:41.034485 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:49:41.035261 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:49:41.036218 systemd-logind[1448]: Removed session 20. Jan 29 11:49:45.879550 kubelet[2493]: E0129 11:49:45.879500 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:49:46.040286 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:50546.service - OpenSSH per-connection server daemon (10.0.0.1:50546). Jan 29 11:49:46.076663 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 50546 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:46.078303 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:46.082288 systemd-logind[1448]: New session 21 of user core. Jan 29 11:49:46.091284 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:49:46.200963 sshd[4086]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:46.205524 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:50546.service: Deactivated successfully. Jan 29 11:49:46.207595 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:49:46.208306 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:49:46.209464 systemd-logind[1448]: Removed session 21. Jan 29 11:49:51.212961 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:54286.service - OpenSSH per-connection server daemon (10.0.0.1:54286). Jan 29 11:49:51.251288 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 54286 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:51.253640 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:51.258618 systemd-logind[1448]: New session 22 of user core. Jan 29 11:49:51.268034 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:49:51.407000 sshd[4103]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:51.410370 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:54286.service: Deactivated successfully. 
Jan 29 11:49:51.412865 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:49:51.414905 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:49:51.415961 systemd-logind[1448]: Removed session 22. Jan 29 11:49:56.418695 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:54290.service - OpenSSH per-connection server daemon (10.0.0.1:54290). Jan 29 11:49:56.452297 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 54290 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:49:56.454167 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:56.458408 systemd-logind[1448]: New session 23 of user core. Jan 29 11:49:56.468013 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:49:56.581494 sshd[4117]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:56.587108 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:54290.service: Deactivated successfully. Jan 29 11:49:56.589495 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:49:56.590334 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:49:56.591497 systemd-logind[1448]: Removed session 23. Jan 29 11:50:01.592449 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:47606.service - OpenSSH per-connection server daemon (10.0.0.1:47606). Jan 29 11:50:01.627185 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 47606 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:50:01.629115 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:01.633047 systemd-logind[1448]: New session 24 of user core. Jan 29 11:50:01.642056 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:50:01.753496 sshd[4133]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:01.764572 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:47606.service: Deactivated successfully. Jan 29 11:50:01.767143 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:50:01.769356 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:50:01.771500 systemd[1]: Started sshd@24-10.0.0.21:22-10.0.0.1:47610.service - OpenSSH per-connection server daemon (10.0.0.1:47610). Jan 29 11:50:01.772515 systemd-logind[1448]: Removed session 24. Jan 29 11:50:01.819379 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 47610 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:50:01.821160 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:01.826434 systemd-logind[1448]: New session 25 of user core. Jan 29 11:50:01.836051 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:50:03.197204 containerd[1461]: time="2025-01-29T11:50:03.197110999Z" level=info msg="StopContainer for \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\" with timeout 30 (s)" Jan 29 11:50:03.197661 containerd[1461]: time="2025-01-29T11:50:03.197555424Z" level=info msg="Stop container \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\" with signal terminated" Jan 29 11:50:03.225683 systemd[1]: cri-containerd-de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76.scope: Deactivated successfully. 
Jan 29 11:50:03.248914 containerd[1461]: time="2025-01-29T11:50:03.248283843Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:50:03.250984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76-rootfs.mount: Deactivated successfully. Jan 29 11:50:03.252107 containerd[1461]: time="2025-01-29T11:50:03.252061331Z" level=info msg="StopContainer for \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\" with timeout 2 (s)" Jan 29 11:50:03.252400 containerd[1461]: time="2025-01-29T11:50:03.252327817Z" level=info msg="Stop container \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\" with signal terminated" Jan 29 11:50:03.260127 systemd-networkd[1386]: lxc_health: Link DOWN Jan 29 11:50:03.260548 systemd-networkd[1386]: lxc_health: Lost carrier Jan 29 11:50:03.269066 containerd[1461]: time="2025-01-29T11:50:03.268986233Z" level=info msg="shim disconnected" id=de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76 namespace=k8s.io Jan 29 11:50:03.269066 containerd[1461]: time="2025-01-29T11:50:03.269061035Z" level=warning msg="cleaning up after shim disconnected" id=de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76 namespace=k8s.io Jan 29 11:50:03.269066 containerd[1461]: time="2025-01-29T11:50:03.269070063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:50:03.288587 systemd[1]: cri-containerd-9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c.scope: Deactivated successfully. Jan 29 11:50:03.289006 systemd[1]: cri-containerd-9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c.scope: Consumed 7.435s CPU time. Jan 29 11:50:03.295625 containerd[1461]: time="2025-01-29T11:50:03.295557407Z" level=info msg="StopContainer for \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\" returns successfully" Jan 29 11:50:03.301900 containerd[1461]: time="2025-01-29T11:50:03.301818315Z" level=info msg="StopPodSandbox for \"8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4\"" Jan 29 11:50:03.301900 containerd[1461]: time="2025-01-29T11:50:03.301906883Z" level=info msg="Container to stop \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:50:03.304556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4-shm.mount: Deactivated successfully. Jan 29 11:50:03.313690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c-rootfs.mount: Deactivated successfully. Jan 29 11:50:03.314637 systemd[1]: cri-containerd-8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4.scope: Deactivated successfully. 
Jan 29 11:50:03.324799 containerd[1461]: time="2025-01-29T11:50:03.324690959Z" level=info msg="shim disconnected" id=9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c namespace=k8s.io Jan 29 11:50:03.324799 containerd[1461]: time="2025-01-29T11:50:03.324779808Z" level=warning msg="cleaning up after shim disconnected" id=9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c namespace=k8s.io Jan 29 11:50:03.324799 containerd[1461]: time="2025-01-29T11:50:03.324803994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:50:03.337447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4-rootfs.mount: Deactivated successfully. Jan 29 11:50:03.339113 containerd[1461]: time="2025-01-29T11:50:03.339052490Z" level=info msg="shim disconnected" id=8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4 namespace=k8s.io Jan 29 11:50:03.339257 containerd[1461]: time="2025-01-29T11:50:03.339224167Z" level=warning msg="cleaning up after shim disconnected" id=8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4 namespace=k8s.io Jan 29 11:50:03.339257 containerd[1461]: time="2025-01-29T11:50:03.339241559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:50:03.347122 containerd[1461]: time="2025-01-29T11:50:03.347068032Z" level=info msg="StopContainer for \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\" returns successfully" Jan 29 11:50:03.347684 containerd[1461]: time="2025-01-29T11:50:03.347660898Z" level=info msg="StopPodSandbox for \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\"" Jan 29 11:50:03.347739 containerd[1461]: time="2025-01-29T11:50:03.347691577Z" level=info msg="Container to stop \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:50:03.347739 containerd[1461]: time="2025-01-29T11:50:03.347703270Z" level=info msg="Container to stop \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:50:03.347739 containerd[1461]: time="2025-01-29T11:50:03.347712367Z" level=info msg="Container to stop \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:50:03.347739 containerd[1461]: time="2025-01-29T11:50:03.347721955Z" level=info msg="Container to stop \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:50:03.347739 containerd[1461]: time="2025-01-29T11:50:03.347730661Z" level=info msg="Container to stop \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:50:03.355779 systemd[1]: cri-containerd-ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73.scope: Deactivated successfully. 
Jan 29 11:50:03.368067 containerd[1461]: time="2025-01-29T11:50:03.368022582Z" level=info msg="TearDown network for sandbox \"8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4\" successfully" Jan 29 11:50:03.368067 containerd[1461]: time="2025-01-29T11:50:03.368080943Z" level=info msg="StopPodSandbox for \"8c869f5e757a1164b711cfd573130ea65757771b58ba1865ba01f62a2e5e8bb4\" returns successfully" Jan 29 11:50:03.381690 containerd[1461]: time="2025-01-29T11:50:03.381631903Z" level=info msg="shim disconnected" id=ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73 namespace=k8s.io Jan 29 11:50:03.381690 containerd[1461]: time="2025-01-29T11:50:03.381686057Z" level=warning msg="cleaning up after shim disconnected" id=ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73 namespace=k8s.io Jan 29 11:50:03.381690 containerd[1461]: time="2025-01-29T11:50:03.381694012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:50:03.395400 containerd[1461]: time="2025-01-29T11:50:03.395341236Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:50:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:50:03.396986 containerd[1461]: time="2025-01-29T11:50:03.396940144Z" level=info msg="TearDown network for sandbox \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" successfully" Jan 29 11:50:03.396986 containerd[1461]: time="2025-01-29T11:50:03.396971103Z" level=info msg="StopPodSandbox for \"ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73\" returns successfully" Jan 29 11:50:03.487884 kubelet[2493]: I0129 11:50:03.487683 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7www\" (UniqueName: \"kubernetes.io/projected/12cc4805-c777-4e2b-bd44-8f1f312e752b-kube-api-access-t7www\") pod \"12cc4805-c777-4e2b-bd44-8f1f312e752b\" (UID: \"12cc4805-c777-4e2b-bd44-8f1f312e752b\") " Jan 29 11:50:03.487884 kubelet[2493]: I0129 11:50:03.487727 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12cc4805-c777-4e2b-bd44-8f1f312e752b-cilium-config-path\") pod \"12cc4805-c777-4e2b-bd44-8f1f312e752b\" (UID: \"12cc4805-c777-4e2b-bd44-8f1f312e752b\") " Jan 29 11:50:03.491464 kubelet[2493]: I0129 11:50:03.491428 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12cc4805-c777-4e2b-bd44-8f1f312e752b-kube-api-access-t7www" (OuterVolumeSpecName: "kube-api-access-t7www") pod "12cc4805-c777-4e2b-bd44-8f1f312e752b" (UID: "12cc4805-c777-4e2b-bd44-8f1f312e752b"). InnerVolumeSpecName "kube-api-access-t7www". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:03.491543 kubelet[2493]: I0129 11:50:03.491512 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12cc4805-c777-4e2b-bd44-8f1f312e752b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12cc4805-c777-4e2b-bd44-8f1f312e752b" (UID: "12cc4805-c777-4e2b-bd44-8f1f312e752b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:03.588798 kubelet[2493]: I0129 11:50:03.588699 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-etc-cni-netd\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.588798 kubelet[2493]: I0129 11:50:03.588769 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fd9ccf1-806b-4052-9f45-56956988db8a-clustermesh-secrets\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.588798 kubelet[2493]: I0129 11:50:03.588812 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-lib-modules\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.588798 kubelet[2493]: I0129 11:50:03.588831 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-host-proc-sys-kernel\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589149 kubelet[2493]: I0129 11:50:03.588883 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-hostproc\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589149 kubelet[2493]: I0129 11:50:03.588903 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-config-path\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589149 kubelet[2493]: I0129 11:50:03.588922 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-cgroup\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589149 kubelet[2493]: I0129 11:50:03.588941 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fd9ccf1-806b-4052-9f45-56956988db8a-hubble-tls\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589149 kubelet[2493]: I0129 11:50:03.588960 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cni-path\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589149 kubelet[2493]: I0129 11:50:03.588979 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-run\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 
11:50:03.589344 kubelet[2493]: I0129 11:50:03.588996 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg74\" (UniqueName: \"kubernetes.io/projected/7fd9ccf1-806b-4052-9f45-56956988db8a-kube-api-access-slg74\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589344 kubelet[2493]: I0129 11:50:03.589013 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-bpf-maps\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589344 kubelet[2493]: I0129 11:50:03.589032 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-xtables-lock\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589344 kubelet[2493]: I0129 11:50:03.589053 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-host-proc-sys-net\") pod \"7fd9ccf1-806b-4052-9f45-56956988db8a\" (UID: \"7fd9ccf1-806b-4052-9f45-56956988db8a\") " Jan 29 11:50:03.589344 kubelet[2493]: I0129 11:50:03.589098 2493 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t7www\" (UniqueName: \"kubernetes.io/projected/12cc4805-c777-4e2b-bd44-8f1f312e752b-kube-api-access-t7www\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.589344 kubelet[2493]: I0129 11:50:03.589114 2493 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12cc4805-c777-4e2b-bd44-8f1f312e752b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.589545 kubelet[2493]: I0129 11:50:03.589174 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.589545 kubelet[2493]: I0129 11:50:03.589218 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.589545 kubelet[2493]: I0129 11:50:03.589238 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.589545 kubelet[2493]: I0129 11:50:03.589264 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-hostproc" (OuterVolumeSpecName: "hostproc") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.590005 kubelet[2493]: I0129 11:50:03.589948 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.590075 kubelet[2493]: I0129 11:50:03.590030 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.590734 kubelet[2493]: I0129 11:50:03.590136 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.590734 kubelet[2493]: I0129 11:50:03.590248 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cni-path" (OuterVolumeSpecName: "cni-path") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.590734 kubelet[2493]: I0129 11:50:03.590278 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.592329 kubelet[2493]: I0129 11:50:03.592287 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd9ccf1-806b-4052-9f45-56956988db8a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:50:03.592393 kubelet[2493]: I0129 11:50:03.592352 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:50:03.593071 kubelet[2493]: I0129 11:50:03.593021 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd9ccf1-806b-4052-9f45-56956988db8a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:03.593071 kubelet[2493]: I0129 11:50:03.593022 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:50:03.594455 kubelet[2493]: I0129 11:50:03.594411 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd9ccf1-806b-4052-9f45-56956988db8a-kube-api-access-slg74" (OuterVolumeSpecName: "kube-api-access-slg74") pod "7fd9ccf1-806b-4052-9f45-56956988db8a" (UID: "7fd9ccf1-806b-4052-9f45-56956988db8a"). InnerVolumeSpecName "kube-api-access-slg74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:50:03.689697 kubelet[2493]: I0129 11:50:03.689635 2493 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.689697 kubelet[2493]: I0129 11:50:03.689674 2493 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.689697 kubelet[2493]: I0129 11:50:03.689688 2493 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.689697 kubelet[2493]: I0129 11:50:03.689704 2493 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fd9ccf1-806b-4052-9f45-56956988db8a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690004 kubelet[2493]: I0129 11:50:03.689718 2493 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690004 kubelet[2493]: I0129 11:50:03.689730 2493 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690004 kubelet[2493]: I0129 11:50:03.689740 2493 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fd9ccf1-806b-4052-9f45-56956988db8a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690004 kubelet[2493]: I0129 11:50:03.689750 2493 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-hostproc\") on node \"localhost\" DevicePath \"\"" 
Jan 29 11:50:03.690004 kubelet[2493]: I0129 11:50:03.689757 2493 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690004 kubelet[2493]: I0129 11:50:03.689767 2493 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690004 kubelet[2493]: I0129 11:50:03.689774 2493 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690004 kubelet[2493]: I0129 11:50:03.689805 2493 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690210 kubelet[2493]: I0129 11:50:03.689815 2493 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fd9ccf1-806b-4052-9f45-56956988db8a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.690210 kubelet[2493]: I0129 11:50:03.689824 2493 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-slg74\" (UniqueName: \"kubernetes.io/projected/7fd9ccf1-806b-4052-9f45-56956988db8a-kube-api-access-slg74\") on node \"localhost\" DevicePath \"\"" Jan 29 11:50:03.886429 systemd[1]: Removed slice kubepods-burstable-pod7fd9ccf1_806b_4052_9f45_56956988db8a.slice - libcontainer container kubepods-burstable-pod7fd9ccf1_806b_4052_9f45_56956988db8a.slice. Jan 29 11:50:03.886539 systemd[1]: kubepods-burstable-pod7fd9ccf1_806b_4052_9f45_56956988db8a.slice: Consumed 7.541s CPU time. Jan 29 11:50:03.887569 systemd[1]: Removed slice kubepods-besteffort-pod12cc4805_c777_4e2b_bd44_8f1f312e752b.slice - libcontainer container kubepods-besteffort-pod12cc4805_c777_4e2b_bd44_8f1f312e752b.slice. 
Jan 29 11:50:04.111746 kubelet[2493]: I0129 11:50:04.111707 2493 scope.go:117] "RemoveContainer" containerID="9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c" Jan 29 11:50:04.112997 containerd[1461]: time="2025-01-29T11:50:04.112950710Z" level=info msg="RemoveContainer for \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\"" Jan 29 11:50:04.120670 containerd[1461]: time="2025-01-29T11:50:04.120622064Z" level=info msg="RemoveContainer for \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\" returns successfully" Jan 29 11:50:04.120940 kubelet[2493]: I0129 11:50:04.120915 2493 scope.go:117] "RemoveContainer" containerID="ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14" Jan 29 11:50:04.122457 containerd[1461]: time="2025-01-29T11:50:04.122413727Z" level=info msg="RemoveContainer for \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\"" Jan 29 11:50:04.126187 containerd[1461]: time="2025-01-29T11:50:04.126150346Z" level=info msg="RemoveContainer for \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\" returns successfully" Jan 29 11:50:04.126331 kubelet[2493]: I0129 11:50:04.126298 2493 scope.go:117] "RemoveContainer" containerID="9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091" Jan 29 11:50:04.128546 containerd[1461]: time="2025-01-29T11:50:04.128505951Z" level=info msg="RemoveContainer for \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\"" Jan 29 11:50:04.132390 containerd[1461]: time="2025-01-29T11:50:04.132359932Z" level=info msg="RemoveContainer for \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\" returns successfully" Jan 29 11:50:04.132649 kubelet[2493]: I0129 11:50:04.132585 2493 scope.go:117] "RemoveContainer" containerID="5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1" Jan 29 11:50:04.134288 containerd[1461]: time="2025-01-29T11:50:04.134249161Z" level=info msg="RemoveContainer for \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\"" Jan 29 11:50:04.138613 containerd[1461]: time="2025-01-29T11:50:04.138511108Z" level=info msg="RemoveContainer for \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\" returns successfully" Jan 29 11:50:04.138821 kubelet[2493]: I0129 11:50:04.138750 2493 scope.go:117] "RemoveContainer" containerID="8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a" Jan 29 11:50:04.139999 containerd[1461]: time="2025-01-29T11:50:04.139965200Z" level=info msg="RemoveContainer for \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\"" Jan 29 11:50:04.155215 containerd[1461]: time="2025-01-29T11:50:04.155168982Z" level=info msg="RemoveContainer for \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\" returns successfully" Jan 29 11:50:04.155539 kubelet[2493]: I0129 11:50:04.155478 2493 scope.go:117] "RemoveContainer" containerID="9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c" Jan 29 11:50:04.160968 containerd[1461]: time="2025-01-29T11:50:04.160885882Z" level=error msg="ContainerStatus for \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\": not found" Jan 29 11:50:04.171326 kubelet[2493]: E0129 11:50:04.171266 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\": not found" containerID="9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c" Jan 29 11:50:04.171468 kubelet[2493]: I0129 11:50:04.171318 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c"} err="failed to get container status \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e3e011aac84720f046ca475e9435d5fbd25c13cc7ecdfba6034d9b65120473c\": not found" Jan 29 11:50:04.171468 kubelet[2493]: I0129 11:50:04.171424 2493 scope.go:117] "RemoveContainer" containerID="ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14" Jan 29 11:50:04.171852 containerd[1461]: time="2025-01-29T11:50:04.171777243Z" level=error msg="ContainerStatus for \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\": not found" Jan 29 11:50:04.172000 kubelet[2493]: E0129 11:50:04.171973 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\": not found" containerID="ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14" Jan 29 11:50:04.172051 kubelet[2493]: I0129 11:50:04.172001 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14"} err="failed to get container status \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed4008edcb85c1c1f07bb1aaaca4b7833a3c64715776338300d5d0398d497e14\": not found" Jan 29 11:50:04.172051 kubelet[2493]: I0129 11:50:04.172027 2493 scope.go:117] "RemoveContainer" containerID="9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091" Jan 29 11:50:04.172260 containerd[1461]: time="2025-01-29T11:50:04.172226657Z" level=error msg="ContainerStatus for \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\": not found" Jan 29 11:50:04.172376 kubelet[2493]: E0129 11:50:04.172348 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\": not found" containerID="9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091" Jan 29 11:50:04.172424 kubelet[2493]: I0129 11:50:04.172375 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091"} err="failed to get container status \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e6b1fefd378a95c109451c9afba61493f1f3b2ddc8b4dd0c7b43cc07c5b5091\": not found" Jan 29 11:50:04.172424 kubelet[2493]: I0129 11:50:04.172401 2493 
scope.go:117] "RemoveContainer" containerID="5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1" Jan 29 11:50:04.172662 containerd[1461]: time="2025-01-29T11:50:04.172621437Z" level=error msg="ContainerStatus for \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\": not found" Jan 29 11:50:04.172809 kubelet[2493]: E0129 11:50:04.172752 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\": not found" containerID="5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1" Jan 29 11:50:04.172809 kubelet[2493]: I0129 11:50:04.172778 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1"} err="failed to get container status \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e2b334c831361b009b9a119e5ef37f2b2355df4dc1f4dee2f8c62413affefa1\": not found" Jan 29 11:50:04.172809 kubelet[2493]: I0129 11:50:04.172807 2493 scope.go:117] "RemoveContainer" containerID="8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a" Jan 29 11:50:04.173176 containerd[1461]: time="2025-01-29T11:50:04.173122388Z" level=error msg="ContainerStatus for \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\": not found" Jan 29 11:50:04.173328 kubelet[2493]: E0129 11:50:04.173285 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\": not found" containerID="8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a" Jan 29 11:50:04.173394 kubelet[2493]: I0129 11:50:04.173326 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a"} err="failed to get container status \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dab921024a71326da3e30c3b4333fcdb34e5cbf0e534123f3224837bacad87a\": not found" Jan 29 11:50:04.173394 kubelet[2493]: I0129 11:50:04.173344 2493 scope.go:117] "RemoveContainer" containerID="de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76" Jan 29 11:50:04.174718 containerd[1461]: time="2025-01-29T11:50:04.174672053Z" level=info msg="RemoveContainer for \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\"" Jan 29 11:50:04.178972 containerd[1461]: time="2025-01-29T11:50:04.178923409Z" level=info msg="RemoveContainer for \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\" returns successfully" Jan 29 11:50:04.179214 kubelet[2493]: I0129 11:50:04.179186 2493 scope.go:117] "RemoveContainer" containerID="de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76" Jan 29 11:50:04.179456 containerd[1461]: 
time="2025-01-29T11:50:04.179420653Z" level=error msg="ContainerStatus for \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\": not found" Jan 29 11:50:04.179640 kubelet[2493]: E0129 11:50:04.179611 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\": not found" containerID="de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76" Jan 29 11:50:04.179680 kubelet[2493]: I0129 11:50:04.179654 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76"} err="failed to get container status \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\": rpc error: code = NotFound desc = an error occurred when try to find container \"de505c67c779e690d98b0c890303d104196259ba99103f6104d11f9a8d4ccf76\": not found" Jan 29 11:50:04.223971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73-rootfs.mount: Deactivated successfully. Jan 29 11:50:04.224085 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef367329601c8a8125f4ca77d9703112722738adcb7fcfbd1ee2e8aad9f92d73-shm.mount: Deactivated successfully. Jan 29 11:50:04.224169 systemd[1]: var-lib-kubelet-pods-12cc4805\x2dc777\x2d4e2b\x2dbd44\x2d8f1f312e752b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt7www.mount: Deactivated successfully. Jan 29 11:50:04.224258 systemd[1]: var-lib-kubelet-pods-7fd9ccf1\x2d806b\x2d4052\x2d9f45\x2d56956988db8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dslg74.mount: Deactivated successfully. Jan 29 11:50:04.224335 systemd[1]: var-lib-kubelet-pods-7fd9ccf1\x2d806b\x2d4052\x2d9f45\x2d56956988db8a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:50:04.224409 systemd[1]: var-lib-kubelet-pods-7fd9ccf1\x2d806b\x2d4052\x2d9f45\x2d56956988db8a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:50:05.157037 sshd[4148]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:05.165244 systemd[1]: sshd@24-10.0.0.21:22-10.0.0.1:47610.service: Deactivated successfully. Jan 29 11:50:05.167616 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:50:05.169156 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:50:05.179331 systemd[1]: Started sshd@25-10.0.0.21:22-10.0.0.1:47614.service - OpenSSH per-connection server daemon (10.0.0.1:47614). Jan 29 11:50:05.180557 systemd-logind[1448]: Removed session 25. Jan 29 11:50:05.207399 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 47614 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:50:05.209091 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:05.213435 systemd-logind[1448]: New session 26 of user core. Jan 29 11:50:05.222976 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 11:50:05.757173 sshd[4310]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:05.766984 systemd[1]: sshd@25-10.0.0.21:22-10.0.0.1:47614.service: Deactivated successfully. 
Jan 29 11:50:05.769073 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:50:05.769232 kubelet[2493]: E0129 11:50:05.769072 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fd9ccf1-806b-4052-9f45-56956988db8a" containerName="apply-sysctl-overwrites" Jan 29 11:50:05.769232 kubelet[2493]: E0129 11:50:05.769097 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fd9ccf1-806b-4052-9f45-56956988db8a" containerName="mount-bpf-fs" Jan 29 11:50:05.769232 kubelet[2493]: E0129 11:50:05.769106 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fd9ccf1-806b-4052-9f45-56956988db8a" containerName="clean-cilium-state" Jan 29 11:50:05.769232 kubelet[2493]: E0129 11:50:05.769112 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fd9ccf1-806b-4052-9f45-56956988db8a" containerName="cilium-agent" Jan 29 11:50:05.769232 kubelet[2493]: E0129 11:50:05.769120 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12cc4805-c777-4e2b-bd44-8f1f312e752b" containerName="cilium-operator" Jan 29 11:50:05.769232 kubelet[2493]: E0129 11:50:05.769130 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fd9ccf1-806b-4052-9f45-56956988db8a" containerName="mount-cgroup" Jan 29 11:50:05.769232 kubelet[2493]: I0129 11:50:05.769154 2493 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cc4805-c777-4e2b-bd44-8f1f312e752b" containerName="cilium-operator" Jan 29 11:50:05.769232 kubelet[2493]: I0129 11:50:05.769160 2493 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd9ccf1-806b-4052-9f45-56956988db8a" containerName="cilium-agent" Jan 29 11:50:05.772410 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:50:05.786349 systemd[1]: Started sshd@26-10.0.0.21:22-10.0.0.1:47616.service - OpenSSH per-connection server daemon (10.0.0.1:47616). Jan 29 11:50:05.791409 systemd-logind[1448]: Removed session 26. Jan 29 11:50:05.796780 systemd[1]: Created slice kubepods-burstable-pod5322ed1d_5c4b_4d0a_826d_757781825fce.slice - libcontainer container kubepods-burstable-pod5322ed1d_5c4b_4d0a_826d_757781825fce.slice. Jan 29 11:50:05.822302 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 47616 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:50:05.823910 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:05.828084 systemd-logind[1448]: New session 27 of user core. Jan 29 11:50:05.836021 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 11:50:05.878896 kubelet[2493]: E0129 11:50:05.878866 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:50:05.880756 kubelet[2493]: I0129 11:50:05.880715 2493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12cc4805-c777-4e2b-bd44-8f1f312e752b" path="/var/lib/kubelet/pods/12cc4805-c777-4e2b-bd44-8f1f312e752b/volumes" Jan 29 11:50:05.881373 kubelet[2493]: I0129 11:50:05.881353 2493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd9ccf1-806b-4052-9f45-56956988db8a" path="/var/lib/kubelet/pods/7fd9ccf1-806b-4052-9f45-56956988db8a/volumes" Jan 29 11:50:05.888624 sshd[4323]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:05.899502 systemd[1]: sshd@26-10.0.0.21:22-10.0.0.1:47616.service: Deactivated successfully. 
Jan 29 11:50:05.901091 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 11:50:05.902498 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit.
Jan 29 11:50:05.903780 systemd[1]: Started sshd@27-10.0.0.21:22-10.0.0.1:47630.service - OpenSSH per-connection server daemon (10.0.0.1:47630).
Jan 29 11:50:05.904414 kubelet[2493]: I0129 11:50:05.904125 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-host-proc-sys-net\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904414 kubelet[2493]: I0129 11:50:05.904160 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5322ed1d-5c4b-4d0a-826d-757781825fce-hubble-tls\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904414 kubelet[2493]: I0129 11:50:05.904180 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5322ed1d-5c4b-4d0a-826d-757781825fce-cilium-config-path\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904414 kubelet[2493]: I0129 11:50:05.904195 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-host-proc-sys-kernel\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904414 kubelet[2493]: I0129 11:50:05.904215 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-etc-cni-netd\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904414 kubelet[2493]: I0129 11:50:05.904229 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-cilium-run\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904662 kubelet[2493]: I0129 11:50:05.904243 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-lib-modules\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904662 kubelet[2493]: I0129 11:50:05.904257 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5322ed1d-5c4b-4d0a-826d-757781825fce-clustermesh-secrets\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904662 kubelet[2493]: I0129 11:50:05.904272 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-cilium-cgroup\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904662 kubelet[2493]: I0129 11:50:05.904284 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-cni-path\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904662 kubelet[2493]: I0129 11:50:05.904308 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gxw\" (UniqueName: \"kubernetes.io/projected/5322ed1d-5c4b-4d0a-826d-757781825fce-kube-api-access-k7gxw\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904662 kubelet[2493]: I0129 11:50:05.904323 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-bpf-maps\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904816 kubelet[2493]: I0129 11:50:05.904335 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-hostproc\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904816 kubelet[2493]: I0129 11:50:05.904349 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5322ed1d-5c4b-4d0a-826d-757781825fce-xtables-lock\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904816 kubelet[2493]: I0129 11:50:05.904362 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5322ed1d-5c4b-4d0a-826d-757781825fce-cilium-ipsec-secrets\") pod \"cilium-hmrwd\" (UID: \"5322ed1d-5c4b-4d0a-826d-757781825fce\") " pod="kube-system/cilium-hmrwd"
Jan 29 11:50:05.904704 systemd-logind[1448]: Removed session 27.
Jan 29 11:50:05.934379 kubelet[2493]: E0129 11:50:05.934338 2493 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:50:05.948314 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 47630 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:50:05.949999 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:50:05.953894 systemd-logind[1448]: New session 28 of user core.
Jan 29 11:50:05.964949 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 11:50:06.100765 kubelet[2493]: E0129 11:50:06.100603 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:06.102009 containerd[1461]: time="2025-01-29T11:50:06.101260199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hmrwd,Uid:5322ed1d-5c4b-4d0a-826d-757781825fce,Namespace:kube-system,Attempt:0,}"
Jan 29 11:50:06.128716 containerd[1461]: time="2025-01-29T11:50:06.128593514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:50:06.128716 containerd[1461]: time="2025-01-29T11:50:06.128671832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:50:06.128716 containerd[1461]: time="2025-01-29T11:50:06.128687191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:50:06.128965 containerd[1461]: time="2025-01-29T11:50:06.128814423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:50:06.150650 systemd[1]: Started cri-containerd-7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6.scope - libcontainer container 7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6.
Jan 29 11:50:06.183927 containerd[1461]: time="2025-01-29T11:50:06.183863306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hmrwd,Uid:5322ed1d-5c4b-4d0a-826d-757781825fce,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\""
Jan 29 11:50:06.184999 kubelet[2493]: E0129 11:50:06.184965 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:06.190425 containerd[1461]: time="2025-01-29T11:50:06.190225355Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:50:06.210893 containerd[1461]: time="2025-01-29T11:50:06.210825165Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cddb80b5ab8000b22fcb11dff258eb28c3ccfdbfc07eaf9edf18e33763aaf9a7\""
Jan 29 11:50:06.212535 containerd[1461]: time="2025-01-29T11:50:06.212456582Z" level=info msg="StartContainer for \"cddb80b5ab8000b22fcb11dff258eb28c3ccfdbfc07eaf9edf18e33763aaf9a7\""
Jan 29 11:50:06.246007 systemd[1]: Started cri-containerd-cddb80b5ab8000b22fcb11dff258eb28c3ccfdbfc07eaf9edf18e33763aaf9a7.scope - libcontainer container cddb80b5ab8000b22fcb11dff258eb28c3ccfdbfc07eaf9edf18e33763aaf9a7.
Jan 29 11:50:06.274599 containerd[1461]: time="2025-01-29T11:50:06.274541625Z" level=info msg="StartContainer for \"cddb80b5ab8000b22fcb11dff258eb28c3ccfdbfc07eaf9edf18e33763aaf9a7\" returns successfully"
Jan 29 11:50:06.284847 systemd[1]: cri-containerd-cddb80b5ab8000b22fcb11dff258eb28c3ccfdbfc07eaf9edf18e33763aaf9a7.scope: Deactivated successfully.
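This block is the CRI handshake the kubelet drives for the new cilium-hmrwd pod: RunPodSandbox yields a sandbox ID, then each init container goes through CreateContainer and StartContainer inside that sandbox; mount-cgroup exits almost immediately, which is why its scope is deactivated right after the start. A compressed sketch of the same sequence against the k8s.io/cri-api v1 client is below; only the metadata visible in the log is filled in, and a real call would also need image, mounts, and security settings, so this is illustrative, not the kubelet's actual code.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Connect to containerd's CRI socket (path assumed here).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata taken from the RunPodSandbox log line above;
	// everything else a real request carries is elided.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "cilium-hmrwd", Uid: "5322ed1d-5c4b-4d0a-826d-757781825fce",
			Namespace: "kube-system", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// The first init container, as named in the log; image elided.
	containerCfg := &runtimeapi.ContainerConfig{
		Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        containerCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```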
Jan 29 11:50:06.321760 containerd[1461]: time="2025-01-29T11:50:06.321684649Z" level=info msg="shim disconnected" id=cddb80b5ab8000b22fcb11dff258eb28c3ccfdbfc07eaf9edf18e33763aaf9a7 namespace=k8s.io
Jan 29 11:50:06.321760 containerd[1461]: time="2025-01-29T11:50:06.321752307Z" level=warning msg="cleaning up after shim disconnected" id=cddb80b5ab8000b22fcb11dff258eb28c3ccfdbfc07eaf9edf18e33763aaf9a7 namespace=k8s.io
Jan 29 11:50:06.321760 containerd[1461]: time="2025-01-29T11:50:06.321762506Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:50:07.122935 kubelet[2493]: E0129 11:50:07.122896 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:07.124695 containerd[1461]: time="2025-01-29T11:50:07.124645483Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:50:07.141124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636170533.mount: Deactivated successfully.
Jan 29 11:50:07.144627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178654568.mount: Deactivated successfully.
Jan 29 11:50:07.149339 containerd[1461]: time="2025-01-29T11:50:07.149290109Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997\""
Jan 29 11:50:07.149855 containerd[1461]: time="2025-01-29T11:50:07.149812440Z" level=info msg="StartContainer for \"57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997\""
Jan 29 11:50:07.192970 systemd[1]: Started cri-containerd-57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997.scope - libcontainer container 57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997.
Jan 29 11:50:07.220104 containerd[1461]: time="2025-01-29T11:50:07.220060923Z" level=info msg="StartContainer for \"57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997\" returns successfully"
Jan 29 11:50:07.227135 systemd[1]: cri-containerd-57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997.scope: Deactivated successfully.
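The \x2d and \x7e runs in the mount-unit names here and in the earlier kubelet volume unmounts are systemd's path escaping: / becomes -, and any character outside a small safe set is hex-escaped, so every literal - in a pod UID shows up as \x2d and the ~ in kubernetes.io~projected as \x7e. A minimal sketch of that mapping follows, approximating `systemd-escape --path`; edge cases like duplicate slashes, the root path, and a leading dot are skipped.

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: strip the outer
// slashes, turn the remaining '/' into '-', and hex-escape everything
// that is not [A-Za-z0-9_.].
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == '.' && i > 0:
			b.WriteRune(r)
		default:
			fmt.Fprintf(&b, `\x%02x`, r)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the unit name logged for the old pod's
	// kube-api-access volume unmount.
	path := "/var/lib/kubelet/pods/12cc4805-c777-4e2b-bd44-8f1f312e752b/volumes/kubernetes.io~projected/kube-api-access-t7www"
	fmt.Println(escapePath(path) + ".mount")
}
```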
Jan 29 11:50:07.249694 containerd[1461]: time="2025-01-29T11:50:07.249631158Z" level=info msg="shim disconnected" id=57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997 namespace=k8s.io
Jan 29 11:50:07.249694 containerd[1461]: time="2025-01-29T11:50:07.249692604Z" level=warning msg="cleaning up after shim disconnected" id=57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997 namespace=k8s.io
Jan 29 11:50:07.249694 containerd[1461]: time="2025-01-29T11:50:07.249702974Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:50:07.649171 kubelet[2493]: I0129 11:50:07.649110 2493 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:50:07Z","lastTransitionTime":"2025-01-29T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 11:50:07.879136 kubelet[2493]: E0129 11:50:07.879094 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:07.879317 kubelet[2493]: E0129 11:50:07.879209 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:08.010089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57e10a33e3280536d5d1235ce24f0fd4f7e21e412f0ac1615cff398c396aa997-rootfs.mount: Deactivated successfully.
Jan 29 11:50:08.126968 kubelet[2493]: E0129 11:50:08.126330 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:08.128638 containerd[1461]: time="2025-01-29T11:50:08.128509268Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:50:08.146907 containerd[1461]: time="2025-01-29T11:50:08.146802709Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5\""
Jan 29 11:50:08.147442 containerd[1461]: time="2025-01-29T11:50:08.147377150Z" level=info msg="StartContainer for \"748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5\""
Jan 29 11:50:08.183020 systemd[1]: Started cri-containerd-748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5.scope - libcontainer container 748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5.
Jan 29 11:50:08.214818 containerd[1461]: time="2025-01-29T11:50:08.214687100Z" level=info msg="StartContainer for \"748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5\" returns successfully"
Jan 29 11:50:08.217190 systemd[1]: cri-containerd-748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5.scope: Deactivated successfully.
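The mount-bpf-fs step that just ran is the init container that makes /sys/fs/bpf available to the agent; at its core it is a single bpf filesystem mount. A minimal sketch using golang.org/x/sys/unix is below; Cilium's real init also checks whether the mount already exists, which this sketch approximates by tolerating EBUSY.

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Mount the BPF filesystem where Cilium expects its pinned maps.
	// EBUSY means something is already mounted there, which we accept.
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if err != nil && err != unix.EBUSY {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("/sys/fs/bpf is available")
}
```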
Jan 29 11:50:08.244904 containerd[1461]: time="2025-01-29T11:50:08.244805920Z" level=info msg="shim disconnected" id=748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5 namespace=k8s.io
Jan 29 11:50:08.244904 containerd[1461]: time="2025-01-29T11:50:08.244896903Z" level=warning msg="cleaning up after shim disconnected" id=748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5 namespace=k8s.io
Jan 29 11:50:08.245119 containerd[1461]: time="2025-01-29T11:50:08.244910649Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:50:09.010412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-748af4cb983c780ca52a9c6fce6ac074f66fd7fcae09fb97d039e34258bd63c5-rootfs.mount: Deactivated successfully.
Jan 29 11:50:09.130317 kubelet[2493]: E0129 11:50:09.130286 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:09.135193 containerd[1461]: time="2025-01-29T11:50:09.135147349Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:50:09.153399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228576457.mount: Deactivated successfully.
Jan 29 11:50:09.157556 containerd[1461]: time="2025-01-29T11:50:09.157501876Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6\""
Jan 29 11:50:09.163485 containerd[1461]: time="2025-01-29T11:50:09.162277825Z" level=info msg="StartContainer for \"a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6\""
Jan 29 11:50:09.225976 systemd[1]: Started cri-containerd-a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6.scope - libcontainer container a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6.
Jan 29 11:50:09.248315 systemd[1]: cri-containerd-a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6.scope: Deactivated successfully.
Jan 29 11:50:09.250091 containerd[1461]: time="2025-01-29T11:50:09.250052502Z" level=info msg="StartContainer for \"a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6\" returns successfully"
Jan 29 11:50:09.275243 containerd[1461]: time="2025-01-29T11:50:09.275054671Z" level=info msg="shim disconnected" id=a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6 namespace=k8s.io
Jan 29 11:50:09.275243 containerd[1461]: time="2025-01-29T11:50:09.275121708Z" level=warning msg="cleaning up after shim disconnected" id=a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6 namespace=k8s.io
Jan 29 11:50:09.275243 containerd[1461]: time="2025-01-29T11:50:09.275132289Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:50:10.010409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0ed2d4064df9ac4fbc57d387a0ae6649a63bb66fc15d15a8353eda0024774e6-rootfs.mount: Deactivated successfully.
Jan 29 11:50:10.134286 kubelet[2493]: E0129 11:50:10.134252 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:10.136182 containerd[1461]: time="2025-01-29T11:50:10.136125492Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:50:10.156673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244949889.mount: Deactivated successfully.
Jan 29 11:50:10.162182 containerd[1461]: time="2025-01-29T11:50:10.162131523Z" level=info msg="CreateContainer within sandbox \"7b1a0f58d631fab65244ef3b19897a3b0a6dbd3fe7b9896302700045987026c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"046f9c22caa35d3e3faa62c506e8d565f033edd3f877bd90282cb6fd9b6b3875\""
Jan 29 11:50:10.162667 containerd[1461]: time="2025-01-29T11:50:10.162635478Z" level=info msg="StartContainer for \"046f9c22caa35d3e3faa62c506e8d565f033edd3f877bd90282cb6fd9b6b3875\""
Jan 29 11:50:10.193970 systemd[1]: Started cri-containerd-046f9c22caa35d3e3faa62c506e8d565f033edd3f877bd90282cb6fd9b6b3875.scope - libcontainer container 046f9c22caa35d3e3faa62c506e8d565f033edd3f877bd90282cb6fd9b6b3875.
Jan 29 11:50:10.223916 containerd[1461]: time="2025-01-29T11:50:10.223868255Z" level=info msg="StartContainer for \"046f9c22caa35d3e3faa62c506e8d565f033edd3f877bd90282cb6fd9b6b3875\" returns successfully"
Jan 29 11:50:10.632869 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 11:50:10.878904 kubelet[2493]: E0129 11:50:10.878815 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-nf7bv" podUID="a24be675-4908-4650-a410-10e8909b6d34"
Jan 29 11:50:11.139092 kubelet[2493]: E0129 11:50:11.139059 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:11.151601 kubelet[2493]: I0129 11:50:11.151551 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hmrwd" podStartSLOduration=6.151538802 podStartE2EDuration="6.151538802s" podCreationTimestamp="2025-01-29 11:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:50:11.151468809 +0000 UTC m=+95.364693865" watchObservedRunningTime="2025-01-29 11:50:11.151538802 +0000 UTC m=+95.364763858"
Jan 29 11:50:12.143088 kubelet[2493]: E0129 11:50:12.142736 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:12.240692 systemd[1]: run-containerd-runc-k8s.io-046f9c22caa35d3e3faa62c506e8d565f033edd3f877bd90282cb6fd9b6b3875-runc.1WwB6u.mount: Deactivated successfully.
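Until the CNI initializes, the node's Ready condition stays False (the "Node became not ready" entry earlier carries the full condition JSON) and pods that need pod networking, like the coredns pod above, are skipped at sync. A hedged client-go sketch of reading that condition follows; in-cluster config is an assumption, and the node name "localhost" is taken from the log.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config is an assumption; outside a pod you would load
	// a kubeconfig instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Mirrors the condition JSON in the log: status, reason, message.
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```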
Jan 29 11:50:12.878394 kubelet[2493]: E0129 11:50:12.878345 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:13.782740 systemd-networkd[1386]: lxc_health: Link UP
Jan 29 11:50:13.783269 systemd-networkd[1386]: lxc_health: Gained carrier
Jan 29 11:50:14.103272 kubelet[2493]: E0129 11:50:14.102683 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:14.147949 kubelet[2493]: E0129 11:50:14.147900 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:15.153638 kubelet[2493]: E0129 11:50:15.153602 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:50:15.715103 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Jan 29 11:50:16.464616 systemd[1]: run-containerd-runc-k8s.io-046f9c22caa35d3e3faa62c506e8d565f033edd3f877bd90282cb6fd9b6b3875-runc.XaurSg.mount: Deactivated successfully.
Jan 29 11:50:20.741432 sshd[4331]: pam_unix(sshd:session): session closed for user core
Jan 29 11:50:20.746705 systemd[1]: sshd@27-10.0.0.21:22-10.0.0.1:47630.service: Deactivated successfully.
Jan 29 11:50:20.749799 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 11:50:20.750608 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit.
Jan 29 11:50:20.751553 systemd-logind[1448]: Removed session 28.