Jan 13 20:36:23.889350 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 13 20:36:23.889378 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:36:23.889393 kernel: BIOS-provided physical RAM map: Jan 13 20:36:23.889402 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 20:36:23.889410 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 20:36:23.889419 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 20:36:23.889428 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 13 20:36:23.889437 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 13 20:36:23.889446 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 13 20:36:23.889457 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 13 20:36:23.889466 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 20:36:23.889474 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 20:36:23.889483 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 20:36:23.889491 kernel: NX (Execute Disable) protection: active Jan 13 20:36:23.889502 kernel: APIC: Static calls initialized Jan 13 20:36:23.889514 kernel: SMBIOS 2.8 present. 
Jan 13 20:36:23.889524 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 13 20:36:23.889533 kernel: Hypervisor detected: KVM Jan 13 20:36:23.889542 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 20:36:23.889551 kernel: kvm-clock: using sched offset of 2290848979 cycles Jan 13 20:36:23.889561 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 20:36:23.889570 kernel: tsc: Detected 2794.748 MHz processor Jan 13 20:36:23.889580 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 20:36:23.889589 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 20:36:23.889599 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 13 20:36:23.889612 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 20:36:23.889622 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 20:36:23.889631 kernel: Using GB pages for direct mapping Jan 13 20:36:23.889640 kernel: ACPI: Early table checksum verification disabled Jan 13 20:36:23.889649 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 13 20:36:23.889658 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:36:23.889667 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:36:23.889676 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:36:23.889689 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 13 20:36:23.889698 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:36:23.889707 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:36:23.889716 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:36:23.889726 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:36:23.889735 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 13 20:36:23.889744 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 13 20:36:23.889759 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 13 20:36:23.889794 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 13 20:36:23.889805 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 13 20:36:23.889817 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 13 20:36:23.889829 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 13 20:36:23.889840 kernel: No NUMA configuration found Jan 13 20:36:23.889851 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 13 20:36:23.889862 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 13 20:36:23.889875 kernel: Zone ranges: Jan 13 20:36:23.889885 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 20:36:23.889895 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 13 20:36:23.889905 kernel: Normal empty Jan 13 20:36:23.889915 kernel: Movable zone start for each node Jan 13 20:36:23.889924 kernel: Early memory node ranges Jan 13 20:36:23.889933 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 20:36:23.889943 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 13 20:36:23.889953 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 13 20:36:23.889966 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:36:23.889976 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 20:36:23.889986 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 13 20:36:23.889996 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 20:36:23.890006 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 20:36:23.890017 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 20:36:23.890027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 20:36:23.890038 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 20:36:23.890048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 20:36:23.890073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 20:36:23.890084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 20:36:23.890094 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 20:36:23.890105 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 20:36:23.890116 kernel: TSC deadline timer available Jan 13 20:36:23.890126 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 13 20:36:23.890136 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 20:36:23.890146 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 13 20:36:23.890156 kernel: kvm-guest: setup PV sched yield Jan 13 20:36:23.890166 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 13 20:36:23.890180 kernel: Booting paravirtualized kernel on KVM Jan 13 20:36:23.890191 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 20:36:23.890202 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 13 20:36:23.890213 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 13 20:36:23.890224 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 13 20:36:23.890234 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 13 20:36:23.890243 kernel: kvm-guest: PV spinlocks enabled Jan 13 20:36:23.890253 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 20:36:23.890265 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:36:23.890280 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:36:23.890290 kernel: random: crng init done Jan 13 20:36:23.890300 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:36:23.890311 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:36:23.890321 kernel: Fallback order for Node 0: 0 Jan 13 20:36:23.890331 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 13 20:36:23.890341 kernel: Policy zone: DMA32 Jan 13 20:36:23.890351 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:36:23.890366 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved) Jan 13 20:36:23.890376 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 20:36:23.890386 kernel: ftrace: allocating 37920 entries in 149 pages Jan 13 20:36:23.890397 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 20:36:23.890407 kernel: Dynamic Preempt: voluntary Jan 13 20:36:23.890418 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:36:23.890429 kernel: rcu: RCU event tracing is enabled. Jan 13 20:36:23.890439 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 20:36:23.890450 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:36:23.890465 kernel: Rude variant of Tasks RCU enabled. Jan 13 20:36:23.890476 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:36:23.890486 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 20:36:23.890496 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 20:36:23.890506 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 13 20:36:23.890516 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:36:23.890526 kernel: Console: colour VGA+ 80x25 Jan 13 20:36:23.890536 kernel: printk: console [ttyS0] enabled Jan 13 20:36:23.890546 kernel: ACPI: Core revision 20230628 Jan 13 20:36:23.890558 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 13 20:36:23.890565 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 20:36:23.890572 kernel: x2apic enabled Jan 13 20:36:23.890580 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 20:36:23.890587 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 13 20:36:23.890594 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 13 20:36:23.890601 kernel: kvm-guest: setup PV IPIs Jan 13 20:36:23.890619 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 20:36:23.890629 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 20:36:23.890638 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 13 20:36:23.890648 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 13 20:36:23.890656 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 13 20:36:23.890666 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 13 20:36:23.890673 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:36:23.890681 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 20:36:23.890688 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:36:23.890696 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 20:36:23.890706 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 13 20:36:23.890716 kernel: RETBleed: Mitigation: untrained return thunk Jan 13 20:36:23.890727 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 20:36:23.890737 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 20:36:23.890747 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 13 20:36:23.890758 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 13 20:36:23.890780 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 13 20:36:23.890788 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 20:36:23.890798 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 20:36:23.890806 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 20:36:23.890813 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 20:36:23.890821 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 13 20:36:23.890828 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:36:23.890836 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:36:23.890843 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:36:23.890850 kernel: landlock: Up and running. Jan 13 20:36:23.890858 kernel: SELinux: Initializing. Jan 13 20:36:23.890868 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:36:23.890876 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:36:23.890883 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 13 20:36:23.890893 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:36:23.890904 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:36:23.890913 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:36:23.890923 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 13 20:36:23.890933 kernel: ... version: 0 Jan 13 20:36:23.890943 kernel: ... bit width: 48 Jan 13 20:36:23.890956 kernel: ... generic registers: 6 Jan 13 20:36:23.890967 kernel: ... value mask: 0000ffffffffffff Jan 13 20:36:23.890977 kernel: ... max period: 00007fffffffffff Jan 13 20:36:23.890986 kernel: ... fixed-purpose events: 0 Jan 13 20:36:23.890996 kernel: ... 
event mask: 000000000000003f Jan 13 20:36:23.891005 kernel: signal: max sigframe size: 1776 Jan 13 20:36:23.891015 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:36:23.891025 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:36:23.891036 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:36:23.891049 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:36:23.891067 kernel: .... node #0, CPUs: #1 #2 #3 Jan 13 20:36:23.891078 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 20:36:23.891088 kernel: smpboot: Max logical packages: 1 Jan 13 20:36:23.891099 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 13 20:36:23.891109 kernel: devtmpfs: initialized Jan 13 20:36:23.891120 kernel: x86/mm: Memory block size: 128MB Jan 13 20:36:23.891130 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:36:23.891141 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 20:36:23.891155 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:36:23.891165 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:36:23.891176 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:36:23.891186 kernel: audit: type=2000 audit(1736800583.888:1): state=initialized audit_enabled=0 res=1 Jan 13 20:36:23.891196 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:36:23.891206 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:36:23.891216 kernel: cpuidle: using governor menu Jan 13 20:36:23.891226 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:36:23.891236 kernel: dca service started, version 1.12.1 Jan 13 20:36:23.891250 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 13 20:36:23.891260 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 13 20:36:23.891271 kernel: PCI: Using configuration type 1 for base access Jan 13 20:36:23.891281 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 20:36:23.891292 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:36:23.891309 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:36:23.891320 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:36:23.891330 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:36:23.891340 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:36:23.891355 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:36:23.891366 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:36:23.891376 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:36:23.891386 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:36:23.891396 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:36:23.891406 kernel: ACPI: Interpreter enabled Jan 13 20:36:23.891416 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 20:36:23.891426 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:36:23.891499 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:36:23.891514 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:36:23.891524 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 13 20:36:23.891534 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:36:23.891764 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:36:23.892090 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 13 20:36:23.892258 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 13 20:36:23.892273 kernel: PCI host bridge to bus 0000:00 Jan 13 20:36:23.892535 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:36:23.892673 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:36:23.892870 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:36:23.893003 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 13 20:36:23.893151 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 20:36:23.893287 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 13 20:36:23.893410 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:36:23.893551 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 13 20:36:23.893685 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 13 20:36:23.893837 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 13 20:36:23.893958 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 13 20:36:23.894108 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 13 20:36:23.894260 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:36:23.894415 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 20:36:23.894584 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 13 20:36:23.894733 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 13 20:36:23.894884 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 13 20:36:23.895019 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 13 20:36:23.895168 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 20:36:23.895321 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 13 
20:36:23.895445 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 13 20:36:23.895583 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 20:36:23.895726 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 13 20:36:23.895881 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 13 20:36:23.896101 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 13 20:36:23.896230 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 13 20:36:23.896358 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 13 20:36:23.896485 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 13 20:36:23.896612 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 13 20:36:23.896757 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 13 20:36:23.896933 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 13 20:36:23.897096 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 13 20:36:23.897218 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 13 20:36:23.897229 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 20:36:23.897241 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 20:36:23.897249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:36:23.897257 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 20:36:23.897264 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 13 20:36:23.897272 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 13 20:36:23.897279 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 13 20:36:23.897287 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 13 20:36:23.897294 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 13 20:36:23.897302 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 13 20:36:23.897312 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 13 20:36:23.897319 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 13 20:36:23.897327 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 13 20:36:23.897335 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 13 20:36:23.897342 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 13 20:36:23.897349 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 13 20:36:23.897357 kernel: iommu: Default domain type: Translated Jan 13 20:36:23.897365 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:36:23.897372 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:36:23.897382 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:36:23.897389 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 20:36:23.897397 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 13 20:36:23.897515 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 13 20:36:23.897639 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 13 20:36:23.897790 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:36:23.897802 kernel: vgaarb: loaded Jan 13 20:36:23.897809 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 13 20:36:23.897821 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 13 20:36:23.897829 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 20:36:23.897836 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 
20:36:23.897844 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:36:23.897852 kernel: pnp: PnP ACPI init Jan 13 20:36:23.897984 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 13 20:36:23.897995 kernel: pnp: PnP ACPI: found 6 devices Jan 13 20:36:23.898003 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:36:23.898014 kernel: NET: Registered PF_INET protocol family Jan 13 20:36:23.898022 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:36:23.898029 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:36:23.898037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:36:23.898045 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:36:23.898052 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:36:23.898067 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:36:23.898075 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:36:23.898083 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:36:23.898093 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:36:23.898101 kernel: NET: Registered PF_XDP protocol family Jan 13 20:36:23.898214 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:36:23.898325 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:36:23.898436 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:36:23.898546 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 13 20:36:23.898666 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 13 20:36:23.898853 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 13 20:36:23.898869 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:36:23.898877 kernel: Initialise system trusted keyrings Jan 13 20:36:23.898884 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:36:23.898892 kernel: Key type asymmetric registered Jan 13 20:36:23.898900 kernel: Asymmetric key parser 'x509' registered Jan 13 20:36:23.898907 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:36:23.898915 kernel: io scheduler mq-deadline registered Jan 13 20:36:23.898923 kernel: io scheduler kyber registered Jan 13 20:36:23.898931 kernel: io scheduler bfq registered Jan 13 20:36:23.898939 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:36:23.898949 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 13 20:36:23.898957 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 20:36:23.898965 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 20:36:23.898973 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:36:23.898980 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:36:23.898988 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:36:23.898996 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:36:23.899004 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:36:23.899143 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 20:36:23.899261 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 20:36:23.899271 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input1 Jan 13 20:36:23.899382 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:36:23 UTC (1736800583) Jan 13 20:36:23.899493 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 13 20:36:23.899503 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 20:36:23.899511 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:36:23.899519 kernel: Segment Routing with IPv6 Jan 13 20:36:23.899530 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:36:23.899537 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:36:23.899545 kernel: Key type dns_resolver registered Jan 13 20:36:23.899553 kernel: IPI shorthand broadcast: enabled Jan 13 20:36:23.899560 kernel: sched_clock: Marking stable (563003053, 105560215)->(726735779, -58172511) Jan 13 20:36:23.899568 kernel: registered taskstats version 1 Jan 13 20:36:23.899576 kernel: Loading compiled-in X.509 certificates Jan 13 20:36:23.899584 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 13 20:36:23.899592 kernel: Key type .fscrypt registered Jan 13 20:36:23.899601 kernel: Key type fscrypt-provisioning registered Jan 13 20:36:23.899615 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:36:23.899625 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:36:23.899632 kernel: ima: No architecture policies found Jan 13 20:36:23.899640 kernel: clk: Disabling unused clocks Jan 13 20:36:23.899648 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 20:36:23.899655 kernel: Write protecting the kernel read-only data: 36864k Jan 13 20:36:23.899663 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 20:36:23.899671 kernel: Run /init as init process Jan 13 20:36:23.899682 kernel: with arguments: Jan 13 20:36:23.899689 kernel: /init Jan 13 20:36:23.899697 kernel: with environment: Jan 13 20:36:23.899704 kernel: HOME=/ Jan 13 20:36:23.899712 kernel: TERM=linux Jan 13 20:36:23.899719 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:36:23.899729 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:36:23.899739 systemd[1]: Detected virtualization kvm. Jan 13 20:36:23.899749 systemd[1]: Detected architecture x86-64. Jan 13 20:36:23.899757 systemd[1]: Running in initrd. Jan 13 20:36:23.899765 systemd[1]: No hostname configured, using default hostname. Jan 13 20:36:23.899784 systemd[1]: Hostname set to . Jan 13 20:36:23.899793 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:36:23.899801 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:36:23.899809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:36:23.899817 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:36:23.899829 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:36:23.899849 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 13 20:36:23.899859 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:36:23.899868 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:36:23.899878 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:36:23.899889 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:36:23.899897 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:36:23.899906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:36:23.899914 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:36:23.899922 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:36:23.899930 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:36:23.899939 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:36:23.899947 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:36:23.899958 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:36:23.899966 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:36:23.899975 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:36:23.899983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:36:23.899992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:36:23.900000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:36:23.900008 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:36:23.900017 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:36:23.900025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:36:23.900036 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:36:23.900044 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:36:23.900053 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:36:23.900068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:36:23.900076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:36:23.900085 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:36:23.900093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:36:23.900102 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:36:23.900134 systemd-journald[192]: Collecting audit messages is disabled. Jan 13 20:36:23.900156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:36:23.900167 systemd-journald[192]: Journal started Jan 13 20:36:23.900187 systemd-journald[192]: Runtime Journal (/run/log/journal/d5610cb903744546a63a9f5dddef60ef) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:36:23.894695 systemd-modules-load[193]: Inserted module 'overlay' Jan 13 20:36:23.929235 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:36:23.929252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 13 20:36:23.929264 kernel: Bridge firewalling registered Jan 13 20:36:23.921435 systemd-modules-load[193]: Inserted module 'br_netfilter' Jan 13 20:36:23.929533 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:36:23.933401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:36:23.947965 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:36:23.956644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:36:23.957425 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:36:23.967976 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:36:23.979984 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:36:23.985595 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:36:23.988611 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:36:23.991145 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:36:23.994147 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:36:24.007957 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:36:24.011547 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:36:24.019233 dracut-cmdline[228]: dracut-dracut-053 Jan 13 20:36:24.021684 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:36:24.055535 systemd-resolved[233]: Positive Trust Anchors: Jan 13 20:36:24.055554 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:36:24.055590 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:36:24.067589 systemd-resolved[233]: Defaulting to hostname 'linux'. Jan 13 20:36:24.069680 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:36:24.069857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:36:24.095803 kernel: SCSI subsystem initialized Jan 13 20:36:24.104792 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:36:24.115794 kernel: iscsi: registered transport (tcp) Jan 13 20:36:24.136026 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:36:24.136065 kernel: QLogic iSCSI HBA Driver Jan 13 20:36:24.182259 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 13 20:36:24.191946 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:36:24.214970 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:36:24.215003 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:36:24.216031 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:36:24.259803 kernel: raid6: avx2x4 gen() 21373 MB/s Jan 13 20:36:24.276802 kernel: raid6: avx2x2 gen() 21432 MB/s Jan 13 20:36:24.293948 kernel: raid6: avx2x1 gen() 19612 MB/s Jan 13 20:36:24.293967 kernel: raid6: using algorithm avx2x2 gen() 21432 MB/s Jan 13 20:36:24.312030 kernel: raid6: .... xor() 16191 MB/s, rmw enabled Jan 13 20:36:24.312071 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:36:24.334803 kernel: xor: automatically using best checksumming function avx Jan 13 20:36:24.493821 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:36:24.507462 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:36:24.524041 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:36:24.535681 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 13 20:36:24.540063 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:36:24.550924 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:36:24.564817 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Jan 13 20:36:24.606862 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:36:24.628009 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:36:24.688407 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:36:24.699926 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:36:24.711626 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:36:24.715583 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:36:24.719290 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:36:24.739953 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:36:24.740160 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:36:24.740174 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:36:24.740187 kernel: GPT:9289727 != 19775487 Jan 13 20:36:24.740199 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:36:24.740217 kernel: GPT:9289727 != 19775487 Jan 13 20:36:24.740228 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:36:24.740240 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:36:24.724025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:36:24.726118 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:36:24.737947 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:36:24.761893 kernel: libata version 3.00 loaded. Jan 13 20:36:24.761943 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:36:24.756419 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 13 20:36:24.766782 kernel: AES CTR mode by8 optimization enabled Jan 13 20:36:24.773159 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:36:24.777222 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:36:24.799365 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:36:24.799383 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:36:24.799536 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (461) Jan 13 20:36:24.799548 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:36:24.799685 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469) Jan 13 20:36:24.799696 kernel: scsi host0: ahci Jan 13 20:36:24.799876 kernel: scsi host1: ahci Jan 13 20:36:24.800025 kernel: scsi host2: ahci Jan 13 20:36:24.800182 kernel: scsi host3: ahci Jan 13 20:36:24.800326 kernel: scsi host4: ahci Jan 13 20:36:24.800468 kernel: scsi host5: ahci Jan 13 20:36:24.800619 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 20:36:24.800631 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 20:36:24.800641 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 20:36:24.800654 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 20:36:24.800664 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 20:36:24.800674 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 20:36:24.798804 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:36:24.817246 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:36:24.819890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:36:24.827445 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:36:24.842892 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:36:24.845354 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:36:24.845413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:36:24.849475 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:36:24.851914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:36:24.851972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:36:24.855152 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:36:24.860858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:36:24.908525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:36:24.920911 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:36:24.945560 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:36:25.047836 disk-uuid[552]: Primary Header is updated. Jan 13 20:36:25.047836 disk-uuid[552]: Secondary Entries is updated. 
Jan 13 20:36:25.047836 disk-uuid[552]: Secondary Header is updated. Jan 13 20:36:25.051433 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:36:25.054796 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:36:25.112459 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:36:25.112527 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:36:25.112546 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:36:25.112560 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:36:25.113792 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:36:25.115193 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:36:25.116042 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:36:25.117171 kernel: ata3.00: applying bridge limits Jan 13 20:36:25.117195 kernel: ata3.00: configured for UDMA/100 Jan 13 20:36:25.117786 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:36:25.173813 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:36:25.187468 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:36:25.187487 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:36:26.074738 disk-uuid[566]: The operation has completed successfully. Jan 13 20:36:26.076448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:36:26.100377 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:36:26.100493 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:36:26.139017 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:36:26.141988 sh[593]: Success Jan 13 20:36:26.163803 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:36:26.195074 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:36:26.223077 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:36:26.226047 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:36:26.236123 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:36:26.236146 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:36:26.236157 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:36:26.237896 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:36:26.237915 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:36:26.242019 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:36:26.242552 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:36:26.260917 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:36:26.263482 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:36:26.282556 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:36:26.282588 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:36:26.282603 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:36:26.285795 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:36:26.293601 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 13 20:36:26.295408 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:36:26.367053 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:36:26.384897 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:36:26.405905 systemd-networkd[771]: lo: Link UP Jan 13 20:36:26.405914 systemd-networkd[771]: lo: Gained carrier Jan 13 20:36:26.408658 systemd-networkd[771]: Enumeration completed Jan 13 20:36:26.408746 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:36:26.409545 systemd[1]: Reached target network.target - Network. Jan 13 20:36:26.413401 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:36:26.413408 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:36:26.417423 systemd-networkd[771]: eth0: Link UP Jan 13 20:36:26.417432 systemd-networkd[771]: eth0: Gained carrier Jan 13 20:36:26.417438 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:36:26.434807 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:36:26.512932 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:36:26.530954 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:36:26.573000 systemd-resolved[233]: Detected conflict on linux IN A 10.0.0.79 Jan 13 20:36:26.573009 systemd-resolved[233]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Jan 13 20:36:26.617295 ignition[776]: Ignition 2.20.0 Jan 13 20:36:26.617308 ignition[776]: Stage: fetch-offline Jan 13 20:36:26.617343 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:36:26.617353 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:36:26.617445 ignition[776]: parsed url from cmdline: "" Jan 13 20:36:26.617449 ignition[776]: no config URL provided Jan 13 20:36:26.617454 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:36:26.617462 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:36:26.617490 ignition[776]: op(1): [started] loading QEMU firmware config module Jan 13 20:36:26.617495 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:36:26.627253 ignition[776]: op(1): [finished] loading QEMU firmware config module Jan 13 20:36:26.666030 ignition[776]: parsing config with SHA512: 7bc50856840b2bbca9a0ef3dceba7ee049907e15fa5a0ba860d9769f6b389d5a63fcfbb9c24329fa2e7e79a68f0384b680d611b7d65705f751a3e47a7f8f887e Jan 13 20:36:26.669448 unknown[776]: fetched base config from "system" Jan 13 20:36:26.669459 unknown[776]: fetched user config from "qemu" Jan 13 20:36:26.669790 ignition[776]: fetch-offline: fetch-offline passed Jan 13 20:36:26.669851 ignition[776]: Ignition finished successfully Jan 13 20:36:26.672113 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:36:26.673538 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:36:26.685900 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 20:36:26.697498 ignition[786]: Ignition 2.20.0 Jan 13 20:36:26.697509 ignition[786]: Stage: kargs Jan 13 20:36:26.697658 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:36:26.697669 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:36:26.715021 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:36:26.698445 ignition[786]: kargs: kargs passed Jan 13 20:36:26.698486 ignition[786]: Ignition finished successfully Jan 13 20:36:26.729900 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:36:26.740903 ignition[795]: Ignition 2.20.0 Jan 13 20:36:26.740914 ignition[795]: Stage: disks Jan 13 20:36:26.743755 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:36:26.741066 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:36:26.747162 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:36:26.741077 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:36:26.748672 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:36:26.741858 ignition[795]: disks: disks passed Jan 13 20:36:26.750802 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:36:26.741900 ignition[795]: Ignition finished successfully Jan 13 20:36:26.751825 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:36:26.753643 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:36:26.767899 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:36:26.795505 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:36:27.022252 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:36:27.041986 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:36:27.138808 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:36:27.139717 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:36:27.141210 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:36:27.153862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:36:27.155488 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:36:27.156655 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:36:27.163313 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) Jan 13 20:36:27.163331 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:36:27.163347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:36:27.156691 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:36:27.169071 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:36:27.169089 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:36:27.156711 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:36:27.163688 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:36:27.170320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:36:27.172999 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:36:27.208720 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:36:27.213695 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:36:27.218365 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:36:27.222475 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:36:27.328261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:36:27.342076 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:36:27.345861 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:36:27.350548 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:36:27.351851 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:36:27.377412 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:36:27.400655 ignition[930]: INFO : Ignition 2.20.0 Jan 13 20:36:27.400655 ignition[930]: INFO : Stage: mount Jan 13 20:36:27.402503 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:36:27.402503 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:36:27.402503 ignition[930]: INFO : mount: mount passed Jan 13 20:36:27.402503 ignition[930]: INFO : Ignition finished successfully Jan 13 20:36:27.408207 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:36:27.420019 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:36:27.428214 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:36:27.438819 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Jan 13 20:36:27.440924 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:36:27.440949 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:36:27.440960 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:36:27.444794 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:36:27.445766 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:36:27.468500 ignition[959]: INFO : Ignition 2.20.0 Jan 13 20:36:27.468500 ignition[959]: INFO : Stage: files Jan 13 20:36:27.470375 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:36:27.470375 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:36:27.473020 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:36:27.474298 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:36:27.474298 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:36:27.478416 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:36:27.480199 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:36:27.480199 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:36:27.479237 unknown[959]: wrote ssh authorized keys file for user: core Jan 13 20:36:27.484667 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:36:27.484667 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:36:27.519081 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:36:27.629613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:36:27.629613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:36:27.633882 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:36:28.113752 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:36:28.198966 systemd-networkd[771]: eth0: Gained IPv6LL Jan 13 20:36:28.208760 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:36:28.221362 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:36:28.223550 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:36:28.225598 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:36:28.227707 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:36:28.229725 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:36:28.231975 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:36:28.234093 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:36:28.236176 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:36:28.238710 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:36:28.247702 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:36:28.249455 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:36:28.252006 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:36:28.254503 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:36:28.256926 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:36:28.559642 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:36:28.948066 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:36:28.948066 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:36:28.952391 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:36:28.955004 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:36:28.955004 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:36:28.955004 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 20:36:28.959401 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:36:28.961302 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:36:28.961302 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 20:36:28.961302 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:36:28.985041 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:36:28.991072 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:36:28.992681 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:36:28.992681 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:36:28.992681 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:36:28.992681 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:36:28.992681 
ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:36:28.992681 ignition[959]: INFO : files: files passed Jan 13 20:36:28.992681 ignition[959]: INFO : Ignition finished successfully Jan 13 20:36:29.003660 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:36:29.013961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:36:29.017265 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:36:29.019986 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:36:29.021005 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:36:29.026868 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:36:29.030879 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:36:29.030879 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:36:29.035599 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:36:29.034012 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:36:29.035857 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:36:29.052947 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:36:29.074162 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:36:29.074283 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:36:29.076550 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:36:29.078683 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:36:29.080691 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:36:29.081396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:36:29.097592 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:36:29.108940 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:36:29.118693 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:36:29.118907 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:36:29.119267 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:36:29.194258 ignition[1014]: INFO : Ignition 2.20.0 Jan 13 20:36:29.194258 ignition[1014]: INFO : Stage: umount Jan 13 20:36:29.194258 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:36:29.194258 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:36:29.194258 ignition[1014]: INFO : umount: umount passed Jan 13 20:36:29.194258 ignition[1014]: INFO : Ignition finished successfully Jan 13 20:36:29.119593 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:36:29.119723 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:36:29.120300 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
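The files-stage entries above record Ignition writing payload files, an SSH key for the core user, and systemd unit presets into /sysroot before the result file is dropped at /sysroot/etc/.ignition-result.json. The actual Ignition config used on this host is not part of the log; the following is only a minimal sketch of the kind of config that could produce such entries, written as a Python script that emits Ignition-style JSON. Field names follow the Ignition v3 config spec; the SSH key and the prepare-helm.service body are placeholders, while the two download URLs are copied from the GET lines in the log.

    # Minimal sketch of an Ignition-style config resembling the operations logged
    # above. The SSH key and unit body are placeholders, not values from this host.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                },
                {
                    "path": "/opt/bin/cilium.tar.gz",
                    "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"},
                },
            ]
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    # Placeholder unit body; the real unit contents are not in the log.
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz\n\n[Install]\nWantedBy=multi-user.target\n",
                },
                # Matches the "setting preset to disabled" entry for this unit above.
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))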
Jan 13 20:36:29.120661 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:36:29.121201 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:36:29.121540 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:36:29.122095 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:36:29.122454 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:36:29.122824 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:36:29.123199 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:36:29.123554 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:36:29.124105 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:36:29.124423 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:36:29.124551 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:36:29.125356 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:36:29.125734 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:36:29.126066 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:36:29.126178 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:36:29.126622 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:36:29.126745 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:36:29.127335 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:36:29.127453 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:36:29.128158 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:36:29.128423 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:36:29.131835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:36:29.132205 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:36:29.132550 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:36:29.133105 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:36:29.133212 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:36:29.133663 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:36:29.133780 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:36:29.134209 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:36:29.134337 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:36:29.134753 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:36:29.134898 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:36:29.135978 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:36:29.136284 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:36:29.136427 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:36:29.137375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:36:29.137681 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:36:29.137844 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 13 20:36:29.138328 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:36:29.138460 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:36:29.142980 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:36:29.143116 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:36:29.153745 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:36:29.153901 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:36:29.154367 systemd[1]: Stopped target network.target - Network. Jan 13 20:36:29.154661 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:36:29.154718 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:36:29.155136 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:36:29.155188 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:36:29.155609 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:36:29.155660 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:36:29.175423 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:36:29.175480 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:36:29.176207 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:36:29.176564 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:36:29.181111 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:36:29.188433 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:36:29.188547 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:36:29.191027 systemd-networkd[771]: eth0: DHCPv6 lease lost Jan 13 20:36:29.192484 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:36:29.192563 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:36:29.194446 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:36:29.194568 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:36:29.197523 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:36:29.197583 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:36:29.205887 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:36:29.207092 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:36:29.207147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:36:29.210287 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:36:29.210337 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:36:29.213039 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:36:29.213087 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:36:29.215951 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:36:29.227020 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:36:29.227154 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:36:29.235036 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 13 20:36:29.235267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:36:29.237485 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:36:29.237545 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:36:29.239705 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:36:29.239747 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:36:29.242219 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:36:29.242271 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:36:29.244826 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:36:29.244875 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:36:29.247092 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:36:29.247150 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:36:29.259041 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:36:29.261440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:36:29.261512 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:36:29.263835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:36:29.263896 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:36:29.267979 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:36:29.268127 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:36:29.563587 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:36:29.563727 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:36:29.573988 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:36:29.576150 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:36:29.576211 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:36:29.589040 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:36:29.597736 systemd[1]: Switching root. Jan 13 20:36:29.637616 systemd-journald[192]: Journal stopped Jan 13 20:36:31.217404 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 13 20:36:31.217478 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:36:31.217496 kernel: SELinux: policy capability open_perms=1 Jan 13 20:36:31.217508 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:36:31.217525 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:36:31.217536 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:36:31.217547 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:36:31.217563 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:36:31.217591 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:36:31.217603 kernel: audit: type=1403 audit(1736800590.383:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:36:31.217616 systemd[1]: Successfully loaded SELinux policy in 38.653ms. Jan 13 20:36:31.217641 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.439ms. 
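After the switch to the real root, the journal restarts and an SELinux policy is loaded, with several policy capabilities reported by the kernel. A minimal sketch, assuming selinuxfs is mounted at /sys/fs/selinux as it normally is once a policy has been loaded, of reading the current enforcing state and the same capability flags:

    # Minimal sketch: report SELinux enforcing state and policy capabilities via
    # selinuxfs. Assumes selinuxfs is mounted at /sys/fs/selinux, which the policy
    # load messages above suggest is the case on this system.
    from pathlib import Path

    selinuxfs = Path("/sys/fs/selinux")
    enforce = selinuxfs / "enforce"
    if enforce.exists():
        print("enforcing:", enforce.read_text().strip())  # "1" enforcing, "0" permissive
        caps_dir = selinuxfs / "policy_capabilities"
        if caps_dir.is_dir():
            for cap in sorted(caps_dir.iterdir()):
                print(f"{cap.name}={cap.read_text().strip()}")
    else:
        print("selinuxfs not mounted; SELinux may be disabled on this host")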
Jan 13 20:36:31.217654 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:36:31.217666 systemd[1]: Detected virtualization kvm. Jan 13 20:36:31.217678 systemd[1]: Detected architecture x86-64. Jan 13 20:36:31.217689 systemd[1]: Detected first boot. Jan 13 20:36:31.217701 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:36:31.217715 zram_generator::config[1058]: No configuration found. Jan 13 20:36:31.217728 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:36:31.217740 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:36:31.217752 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:36:31.217764 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:36:31.217836 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:36:31.217857 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:36:31.217883 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:36:31.217902 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:36:31.217916 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:36:31.217930 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:36:31.217945 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:36:31.217960 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:36:31.217974 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:36:31.217989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:36:31.218003 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:36:31.218018 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:36:31.218035 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:36:31.218049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:36:31.218070 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:36:31.218084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:36:31.218098 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:36:31.218112 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:36:31.218129 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:36:31.218146 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:36:31.218160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:36:31.218174 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:36:31.218188 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:36:31.218202 systemd[1]: Reached target swap.target - Swaps. 
Jan 13 20:36:31.218216 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:36:31.218230 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:36:31.218244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:36:31.218258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:36:31.218272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:36:31.218288 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:36:31.218302 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:36:31.218316 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:36:31.218330 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:36:31.218344 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:36:31.218358 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:36:31.218381 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:36:31.218399 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:36:31.218418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:36:31.218431 systemd[1]: Reached target machines.target - Containers. Jan 13 20:36:31.218444 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:36:31.218456 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:36:31.218468 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:36:31.218481 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:36:31.218493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:36:31.218505 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:36:31.218516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:36:31.218530 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:36:31.218542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:36:31.218561 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:36:31.218582 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:36:31.218595 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:36:31.218607 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:36:31.218618 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:36:31.218630 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:36:31.218646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:36:31.218657 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 13 20:36:31.218669 kernel: loop: module loaded Jan 13 20:36:31.218681 kernel: fuse: init (API version 7.39) Jan 13 20:36:31.218692 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:36:31.218705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:36:31.218719 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:36:31.218731 systemd[1]: Stopped verity-setup.service. Jan 13 20:36:31.218743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:36:31.218758 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:36:31.218820 systemd-journald[1121]: Collecting audit messages is disabled. Jan 13 20:36:31.218847 kernel: ACPI: bus type drm_connector registered Jan 13 20:36:31.218860 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:36:31.218885 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:36:31.218898 systemd-journald[1121]: Journal started Jan 13 20:36:31.218924 systemd-journald[1121]: Runtime Journal (/run/log/journal/d5610cb903744546a63a9f5dddef60ef) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:36:30.971410 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:36:30.991558 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:36:30.992136 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:36:31.222843 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:36:31.224939 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:36:31.226443 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:36:31.228988 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:36:31.231287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:36:31.233282 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:36:31.233505 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:36:31.235275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:36:31.235487 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:36:31.237734 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:36:31.238022 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:36:31.240366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:36:31.240726 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:36:31.243669 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:36:31.243934 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:36:31.245693 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:36:31.245951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:36:31.247659 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:36:31.249733 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:36:31.252425 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 13 20:36:31.275051 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:36:31.283273 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:36:31.298326 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:36:31.300835 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:36:31.300985 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:36:31.304110 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:36:31.307502 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:36:31.310641 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:36:31.312260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:36:31.317077 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:36:31.324466 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:36:31.327207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:36:31.329419 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:36:31.336213 systemd-journald[1121]: Time spent on flushing to /var/log/journal/d5610cb903744546a63a9f5dddef60ef is 19.187ms for 951 entries. Jan 13 20:36:31.336213 systemd-journald[1121]: System Journal (/var/log/journal/d5610cb903744546a63a9f5dddef60ef) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:36:31.495962 systemd-journald[1121]: Received client request to flush runtime journal. Jan 13 20:36:31.496010 kernel: loop0: detected capacity change from 0 to 211296 Jan 13 20:36:31.496028 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:36:31.337641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:36:31.339957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:36:31.344474 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:36:31.351693 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:36:31.354973 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:36:31.360420 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:36:31.362501 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:36:31.364827 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:36:31.393062 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:36:31.400380 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:36:31.402370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:36:31.416119 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:36:31.467370 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 13 20:36:31.472027 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:36:31.487755 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:36:31.501988 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:36:31.508482 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:36:31.523196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:36:31.541207 kernel: loop1: detected capacity change from 0 to 140992 Jan 13 20:36:31.587230 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 13 20:36:31.587259 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 13 20:36:31.597518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:36:31.638810 kernel: loop2: detected capacity change from 0 to 138184 Jan 13 20:36:31.673022 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:36:31.674479 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:36:31.706836 kernel: loop3: detected capacity change from 0 to 211296 Jan 13 20:36:31.714797 kernel: loop4: detected capacity change from 0 to 140992 Jan 13 20:36:31.724787 kernel: loop5: detected capacity change from 0 to 138184 Jan 13 20:36:31.733082 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:36:31.733634 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 13 20:36:31.737289 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:36:31.737385 systemd[1]: Reloading... Jan 13 20:36:31.793820 zram_generator::config[1224]: No configuration found. Jan 13 20:36:31.871487 ldconfig[1166]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:36:31.909259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:36:31.958182 systemd[1]: Reloading finished in 220 ms. Jan 13 20:36:31.991867 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:36:31.996290 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:36:32.012026 systemd[1]: Starting ensure-sysext.service... Jan 13 20:36:32.017984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:36:32.023029 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:36:32.023045 systemd[1]: Reloading... Jan 13 20:36:32.041825 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:36:32.042186 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:36:32.045899 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:36:32.046201 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 20:36:32.046272 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. 
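The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes image is the one Ignition linked into /etc/extensions earlier in the boot. A minimal sketch, assuming the systemd-sysext command-line tool shipped with systemd is available and the script runs as root, of inspecting and refreshing that merge:

    # Minimal sketch: show which system extensions are currently merged and
    # re-merge them after an image under /etc/extensions or /var/lib/extensions
    # changes. Assumes systemd-sysext is present and the script runs as root.
    import subprocess

    def sysext(*args: str) -> None:
        subprocess.run(["systemd-sysext", *args], check=True)

    sysext("status")   # list hierarchies (e.g. /usr, /opt) and merged extensions
    sysext("refresh")  # unmerge and re-merge so added or updated images take effect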
Jan 13 20:36:32.049938 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:36:32.050021 systemd-tmpfiles[1261]: Skipping /boot Jan 13 20:36:32.066807 zram_generator::config[1287]: No configuration found. Jan 13 20:36:32.069072 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:36:32.069210 systemd-tmpfiles[1261]: Skipping /boot Jan 13 20:36:32.229624 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:36:32.279081 systemd[1]: Reloading finished in 255 ms. Jan 13 20:36:32.297959 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:36:32.310189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:36:32.318218 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:36:32.320885 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:36:32.323246 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:36:32.328108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:36:32.331471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:36:32.337075 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:36:32.341631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:36:32.341826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:36:32.342960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:36:32.346764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:36:32.350301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:36:32.352081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:36:32.356648 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:36:32.357823 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:36:32.359262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:36:32.360013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:36:32.361950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:36:32.362239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:36:32.365432 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 13 20:36:32.366532 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:36:32.366716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:36:32.373198 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 13 20:36:32.373405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:36:32.376090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:36:32.376297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:36:32.383092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:36:32.388633 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:36:32.393157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:36:32.395262 augenrules[1360]: No rules Jan 13 20:36:32.395929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:36:32.396049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:36:32.397135 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:36:32.397353 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:36:32.399015 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:36:32.401322 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:36:32.403505 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:36:32.405435 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:36:32.407400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:36:32.407608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:36:32.410505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:36:32.410689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:36:32.413326 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:36:32.413704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:36:32.428444 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:36:32.438382 systemd[1]: Finished ensure-sysext.service. Jan 13 20:36:32.440378 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:36:32.450217 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:36:32.452254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:36:32.454950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:36:32.458966 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:36:32.462793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1372) Jan 13 20:36:32.465136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:36:32.469823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:36:32.477514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 20:36:32.479942 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:36:32.485137 augenrules[1396]: /sbin/augenrules: No change Jan 13 20:36:32.485086 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:36:32.486711 systemd-resolved[1329]: Positive Trust Anchors: Jan 13 20:36:32.486733 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:36:32.488002 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:36:32.490925 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:36:32.492078 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:36:32.492115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:36:32.492735 systemd-resolved[1329]: Defaulting to hostname 'linux'. Jan 13 20:36:32.492899 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:36:32.495399 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:36:32.496870 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:36:32.499169 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:36:32.499350 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:36:32.502140 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:36:32.502322 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:36:32.507919 augenrules[1424]: No rules Jan 13 20:36:32.511895 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:36:32.518754 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:36:32.520575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:36:32.520765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:36:32.533046 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:36:32.537337 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:36:32.543070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:36:32.546440 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:36:32.558928 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:36:32.561055 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 13 20:36:32.561112 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:36:32.581799 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:36:32.599010 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:36:32.603620 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:36:32.603955 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 20:36:32.587136 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:36:32.599152 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:36:32.600654 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:36:32.602054 systemd-networkd[1412]: lo: Link UP Jan 13 20:36:32.602059 systemd-networkd[1412]: lo: Gained carrier Jan 13 20:36:32.614975 systemd-networkd[1412]: Enumeration completed Jan 13 20:36:32.615371 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:36:32.615375 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:36:32.616087 systemd-networkd[1412]: eth0: Link UP Jan 13 20:36:32.616092 systemd-networkd[1412]: eth0: Gained carrier Jan 13 20:36:32.616103 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:36:32.616416 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:36:32.624872 systemd[1]: Reached target network.target - Network. Jan 13 20:36:32.626840 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 13 20:36:32.630815 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:36:32.631444 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. Jan 13 20:36:33.633991 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:36:33.630160 systemd-timesyncd[1415]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:36:33.630199 systemd-timesyncd[1415]: Initial clock synchronization to Mon 2025-01-13 20:36:33.630069 UTC. Jan 13 20:36:33.630443 systemd-resolved[1329]: Clock change detected. Flushing caches. Jan 13 20:36:33.632042 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:36:33.636915 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:36:33.639053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:36:33.734865 kernel: kvm_amd: TSC scaling supported Jan 13 20:36:33.734987 kernel: kvm_amd: Nested Virtualization enabled Jan 13 20:36:33.735027 kernel: kvm_amd: Nested Paging enabled Jan 13 20:36:33.735039 kernel: kvm_amd: LBR virtualization supported Jan 13 20:36:33.735052 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 20:36:33.735068 kernel: kvm_amd: Virtual GIF supported Jan 13 20:36:33.743245 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:36:33.757849 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:36:33.787052 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
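The networkd entries above show eth0 being matched by the stock /usr/lib/systemd/network/zz-default.network and obtaining 10.0.0.79/16 via DHCPv4, after which timesyncd contacts 10.0.0.1 and the resulting clock jump is logged. A minimal sketch, assuming networkctl and timedatectl are on the path as on any systemd host, of checking the same state from a session on the booted machine:

    # Minimal sketch: query the state that systemd-networkd and systemd-timesyncd
    # reported in the log (link status, DHCP address, NTP synchronization).
    # Assumes networkctl and timedatectl are available.
    import subprocess

    for cmd in (
        ["networkctl", "status", "eth0", "--no-pager"],
        ["timedatectl", "show-timesync", "--no-pager"],
        ["timedatectl", "status"],
    ):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)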
Jan 13 20:36:33.800971 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:36:33.808924 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:36:33.839061 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:36:33.841435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:36:33.842625 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:36:33.843850 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:36:33.845157 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:36:33.846641 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:36:33.848095 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:36:33.849415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:36:33.850696 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:36:33.850728 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:36:33.851683 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:36:33.853263 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:36:33.856087 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:36:33.868393 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:36:33.870739 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:36:33.872378 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:36:33.873602 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:36:33.874603 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:36:33.875610 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:36:33.875636 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:36:33.876597 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:36:33.878705 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:36:33.881926 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:36:33.880921 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:36:33.887056 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:36:33.888680 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:36:33.890937 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:36:33.894745 jq[1465]: false Jan 13 20:36:33.895959 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:36:33.899049 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 13 20:36:33.904732 extend-filesystems[1466]: Found loop3 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found loop4 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found loop5 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found sr0 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found vda Jan 13 20:36:33.905703 extend-filesystems[1466]: Found vda1 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found vda2 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found vda3 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found usr Jan 13 20:36:33.905703 extend-filesystems[1466]: Found vda4 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found vda6 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found vda7 Jan 13 20:36:33.905703 extend-filesystems[1466]: Found vda9 Jan 13 20:36:33.905703 extend-filesystems[1466]: Checking size of /dev/vda9 Jan 13 20:36:33.906566 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:36:33.907148 dbus-daemon[1464]: [system] SELinux support is enabled Jan 13 20:36:33.916100 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:36:33.918488 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:36:33.919057 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:36:33.919790 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:36:33.929652 extend-filesystems[1466]: Resized partition /dev/vda9 Jan 13 20:36:33.922956 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:36:33.936561 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:36:33.924763 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:36:33.927888 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:36:33.949260 jq[1482]: true Jan 13 20:36:33.931142 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:36:33.931340 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:36:33.931653 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:36:33.949724 jq[1489]: true Jan 13 20:36:33.931862 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:36:33.934847 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:36:33.935052 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:36:33.947634 (ntainerd)[1490]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:36:33.960438 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1379) Jan 13 20:36:33.960499 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:36:33.958953 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:36:33.958980 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 13 20:36:33.962805 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:36:33.962841 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:36:33.976035 update_engine[1481]: I20250113 20:36:33.975950 1481 main.cc:92] Flatcar Update Engine starting Jan 13 20:36:33.977009 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:36:33.977101 update_engine[1481]: I20250113 20:36:33.977057 1481 update_check_scheduler.cc:74] Next update check in 2m30s Jan 13 20:36:34.020009 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:36:34.068491 tar[1488]: linux-amd64/helm Jan 13 20:36:34.082536 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (Power Button) Jan 13 20:36:34.082562 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:36:34.084242 systemd-logind[1479]: New seat seat0. Jan 13 20:36:34.085841 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:36:34.088595 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:36:34.197115 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:36:34.525962 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:36:34.525962 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:36:34.525962 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:36:34.530188 extend-filesystems[1466]: Resized filesystem in /dev/vda9 Jan 13 20:36:34.531236 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:36:34.531504 containerd[1490]: time="2025-01-13T20:36:34.526574323Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:36:34.532279 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:36:34.532528 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:36:34.536015 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:36:34.561099 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:36:34.564610 containerd[1490]: time="2025-01-13T20:36:34.564473318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:36:34.566582 containerd[1490]: time="2025-01-13T20:36:34.566536016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:36:34.566582 containerd[1490]: time="2025-01-13T20:36:34.566572124Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:36:34.566641 containerd[1490]: time="2025-01-13T20:36:34.566591280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:36:34.566920 containerd[1490]: time="2025-01-13T20:36:34.566808196Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 13 20:36:34.566920 containerd[1490]: time="2025-01-13T20:36:34.566879510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:36:34.566968 containerd[1490]: time="2025-01-13T20:36:34.566953318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:36:34.566968 containerd[1490]: time="2025-01-13T20:36:34.566964720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:36:34.567170 containerd[1490]: time="2025-01-13T20:36:34.567150448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:36:34.567170 containerd[1490]: time="2025-01-13T20:36:34.567167640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:36:34.567216 containerd[1490]: time="2025-01-13T20:36:34.567180715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:36:34.567216 containerd[1490]: time="2025-01-13T20:36:34.567189912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:36:34.567320 containerd[1490]: time="2025-01-13T20:36:34.567299768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:36:34.567594 containerd[1490]: time="2025-01-13T20:36:34.567572810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:36:34.567754 containerd[1490]: time="2025-01-13T20:36:34.567713264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:36:34.567754 containerd[1490]: time="2025-01-13T20:36:34.567732870Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:36:34.567868 containerd[1490]: time="2025-01-13T20:36:34.567844189Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:36:34.567928 containerd[1490]: time="2025-01-13T20:36:34.567911776Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:36:34.570110 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:36:34.572283 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:40746.service - OpenSSH per-connection server daemon (10.0.0.1:40746). Jan 13 20:36:34.577111 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:36:34.577429 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:36:34.582268 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:36:34.597251 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:36:34.608353 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 13 20:36:34.613950 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:36:34.615441 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:36:34.684431 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 40746 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:34.685807 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:34.695484 systemd-logind[1479]: New session 1 of user core. Jan 13 20:36:34.696241 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:36:34.702068 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:36:34.707189 bash[1517]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:36:34.709687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:36:34.710479 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:36:34.732624 containerd[1490]: time="2025-01-13T20:36:34.732344246Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:36:34.732624 containerd[1490]: time="2025-01-13T20:36:34.732413165Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:36:34.732624 containerd[1490]: time="2025-01-13T20:36:34.732445936Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:36:34.732624 containerd[1490]: time="2025-01-13T20:36:34.732462087Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:36:34.732624 containerd[1490]: time="2025-01-13T20:36:34.732474279Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:36:34.732624 containerd[1490]: time="2025-01-13T20:36:34.732628549Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:36:34.733520 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733016085Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733122906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733137002Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733149576Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733163322Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733176015Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733187116Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733200191Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733219347Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733231429Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733242670Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733253811Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733272306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.733921 containerd[1490]: time="2025-01-13T20:36:34.733289127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733300769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733312461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733324744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733336677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733348068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733359509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733371362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733384146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733395126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733406848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733417598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733430162Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733447374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733459357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734170 containerd[1490]: time="2025-01-13T20:36:34.733469175Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:36:34.734476 containerd[1490]: time="2025-01-13T20:36:34.733530290Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:36:34.734476 containerd[1490]: time="2025-01-13T20:36:34.733546230Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:36:34.734476 containerd[1490]: time="2025-01-13T20:36:34.733556709Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:36:34.734476 containerd[1490]: time="2025-01-13T20:36:34.733566828Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:36:34.734476 containerd[1490]: time="2025-01-13T20:36:34.733576326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:36:34.734476 containerd[1490]: time="2025-01-13T20:36:34.733587277Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:36:34.735537 containerd[1490]: time="2025-01-13T20:36:34.734875873Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:36:34.735537 containerd[1490]: time="2025-01-13T20:36:34.734897824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:36:34.735586 containerd[1490]: time="2025-01-13T20:36:34.735147713Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:36:34.735586 containerd[1490]: time="2025-01-13T20:36:34.735195903Z" level=info msg="Connect containerd service" Jan 13 20:36:34.735586 containerd[1490]: time="2025-01-13T20:36:34.735224948Z" level=info msg="using legacy CRI server" Jan 13 20:36:34.735586 containerd[1490]: time="2025-01-13T20:36:34.735231059Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:36:34.735586 containerd[1490]: time="2025-01-13T20:36:34.735328412Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:36:34.735990 containerd[1490]: time="2025-01-13T20:36:34.735920061Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:36:34.736088 
containerd[1490]: time="2025-01-13T20:36:34.736054393Z" level=info msg="Start subscribing containerd event" Jan 13 20:36:34.736123 containerd[1490]: time="2025-01-13T20:36:34.736107804Z" level=info msg="Start recovering state" Jan 13 20:36:34.736256 containerd[1490]: time="2025-01-13T20:36:34.736238789Z" level=info msg="Start event monitor" Jan 13 20:36:34.736300 containerd[1490]: time="2025-01-13T20:36:34.736285667Z" level=info msg="Start snapshots syncer" Jan 13 20:36:34.736300 containerd[1490]: time="2025-01-13T20:36:34.736298792Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:36:34.736337 containerd[1490]: time="2025-01-13T20:36:34.736306857Z" level=info msg="Start streaming server" Jan 13 20:36:34.736997 containerd[1490]: time="2025-01-13T20:36:34.736916520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:36:34.736997 containerd[1490]: time="2025-01-13T20:36:34.736979779Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:36:34.737055 containerd[1490]: time="2025-01-13T20:36:34.737033379Z" level=info msg="containerd successfully booted in 0.289137s" Jan 13 20:36:34.748134 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:36:34.749316 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:36:34.757105 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:36:34.765044 systemd-networkd[1412]: eth0: Gained IPv6LL Jan 13 20:36:34.768413 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:36:34.775093 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:36:34.778110 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:36:34.785272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:36:34.787684 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:36:34.809179 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:36:34.809438 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:36:34.810902 tar[1488]: linux-amd64/LICENSE Jan 13 20:36:34.810930 tar[1488]: linux-amd64/README.md Jan 13 20:36:34.811095 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:36:34.821949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:36:34.823579 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:36:34.878461 systemd[1557]: Queued start job for default target default.target. Jan 13 20:36:34.888056 systemd[1557]: Created slice app.slice - User Application Slice. Jan 13 20:36:34.888082 systemd[1557]: Reached target paths.target - Paths. Jan 13 20:36:34.888096 systemd[1557]: Reached target timers.target - Timers. Jan 13 20:36:34.889538 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:36:34.900411 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:36:34.900564 systemd[1557]: Reached target sockets.target - Sockets. Jan 13 20:36:34.900585 systemd[1557]: Reached target basic.target - Basic System. Jan 13 20:36:34.900645 systemd[1557]: Reached target default.target - Main User Target. Jan 13 20:36:34.900684 systemd[1557]: Startup finished in 137ms. Jan 13 20:36:34.900815 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 13 20:36:34.903345 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:36:34.962754 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:51016.service - OpenSSH per-connection server daemon (10.0.0.1:51016). Jan 13 20:36:35.007786 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 51016 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:35.009161 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:35.012873 systemd-logind[1479]: New session 2 of user core. Jan 13 20:36:35.023931 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:36:35.078972 sshd[1590]: Connection closed by 10.0.0.1 port 51016 Jan 13 20:36:35.079474 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:35.086516 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:51016.service: Deactivated successfully. Jan 13 20:36:35.088607 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:36:35.090194 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:36:35.097156 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:51018.service - OpenSSH per-connection server daemon (10.0.0.1:51018). Jan 13 20:36:35.099935 systemd-logind[1479]: Removed session 2. Jan 13 20:36:35.138453 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 51018 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:35.139964 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:35.143867 systemd-logind[1479]: New session 3 of user core. Jan 13 20:36:35.154933 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:36:35.210109 sshd[1597]: Connection closed by 10.0.0.1 port 51018 Jan 13 20:36:35.210411 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:35.214477 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:51018.service: Deactivated successfully. Jan 13 20:36:35.216256 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:36:35.216796 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:36:35.217946 systemd-logind[1479]: Removed session 3. Jan 13 20:36:35.408615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:36:35.410332 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:36:35.413777 systemd[1]: Startup finished in 699ms (kernel) + 6.686s (initrd) + 4.069s (userspace) = 11.455s. Jan 13 20:36:35.424251 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:36:35.900128 kubelet[1606]: E0113 20:36:35.900039 1606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:36:35.904718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:36:35.904939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:36:45.223145 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:54122.service - OpenSSH per-connection server daemon (10.0.0.1:54122). 
Jan 13 20:36:45.266479 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 54122 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:45.267907 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:45.272952 systemd-logind[1479]: New session 4 of user core. Jan 13 20:36:45.286972 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:36:45.340202 sshd[1622]: Connection closed by 10.0.0.1 port 54122 Jan 13 20:36:45.340653 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:45.349490 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:54122.service: Deactivated successfully. Jan 13 20:36:45.351282 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:36:45.352845 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:36:45.367041 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:54124.service - OpenSSH per-connection server daemon (10.0.0.1:54124). Jan 13 20:36:45.368077 systemd-logind[1479]: Removed session 4. Jan 13 20:36:45.404889 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 54124 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:45.406138 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:45.409846 systemd-logind[1479]: New session 5 of user core. Jan 13 20:36:45.422925 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:36:45.472030 sshd[1629]: Connection closed by 10.0.0.1 port 54124 Jan 13 20:36:45.472494 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:45.481165 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:54124.service: Deactivated successfully. Jan 13 20:36:45.482718 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:36:45.484317 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:36:45.485459 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:54134.service - OpenSSH per-connection server daemon (10.0.0.1:54134). Jan 13 20:36:45.486209 systemd-logind[1479]: Removed session 5. Jan 13 20:36:45.530661 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 54134 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:45.532051 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:45.535491 systemd-logind[1479]: New session 6 of user core. Jan 13 20:36:45.552927 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:36:45.606141 sshd[1636]: Connection closed by 10.0.0.1 port 54134 Jan 13 20:36:45.606470 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:45.613492 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:54134.service: Deactivated successfully. Jan 13 20:36:45.615292 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:36:45.616837 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:36:45.626100 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:54150.service - OpenSSH per-connection server daemon (10.0.0.1:54150). Jan 13 20:36:45.626886 systemd-logind[1479]: Removed session 6. 
Jan 13 20:36:45.666109 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 54150 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:45.667724 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:45.671403 systemd-logind[1479]: New session 7 of user core. Jan 13 20:36:45.680923 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:36:45.737445 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:36:45.737799 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:36:45.754008 sudo[1644]: pam_unix(sudo:session): session closed for user root Jan 13 20:36:45.755373 sshd[1643]: Connection closed by 10.0.0.1 port 54150 Jan 13 20:36:45.755738 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:45.766362 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:54150.service: Deactivated successfully. Jan 13 20:36:45.768052 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:36:45.769555 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:36:45.776082 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:54156.service - OpenSSH per-connection server daemon (10.0.0.1:54156). Jan 13 20:36:45.776813 systemd-logind[1479]: Removed session 7. Jan 13 20:36:45.814369 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 54156 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:45.815696 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:45.818943 systemd-logind[1479]: New session 8 of user core. Jan 13 20:36:45.828921 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:36:45.880890 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:36:45.881222 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:36:45.884485 sudo[1653]: pam_unix(sudo:session): session closed for user root Jan 13 20:36:45.890330 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:36:45.890653 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:36:45.909072 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:36:45.909744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:36:45.911282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:36:45.937033 augenrules[1678]: No rules Jan 13 20:36:45.938053 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:36:45.938331 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:36:45.940918 sudo[1652]: pam_unix(sudo:session): session closed for user root Jan 13 20:36:45.942314 sshd[1651]: Connection closed by 10.0.0.1 port 54156 Jan 13 20:36:45.942661 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:45.958660 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:54156.service: Deactivated successfully. Jan 13 20:36:45.960392 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:36:45.962225 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. 
Jan 13 20:36:45.963300 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:54166.service - OpenSSH per-connection server daemon (10.0.0.1:54166). Jan 13 20:36:45.964064 systemd-logind[1479]: Removed session 8. Jan 13 20:36:46.006096 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 54166 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:36:46.007543 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:46.011177 systemd-logind[1479]: New session 9 of user core. Jan 13 20:36:46.023968 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:36:46.074788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:36:46.076810 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:36:46.077157 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:36:46.079727 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:36:46.122913 kubelet[1695]: E0113 20:36:46.122792 1695 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:36:46.130122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:36:46.130323 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:36:46.329020 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:36:46.329163 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:36:46.570959 dockerd[1723]: time="2025-01-13T20:36:46.570890566Z" level=info msg="Starting up" Jan 13 20:36:48.309029 dockerd[1723]: time="2025-01-13T20:36:48.308960126Z" level=info msg="Loading containers: start." Jan 13 20:36:48.788845 kernel: Initializing XFRM netlink socket Jan 13 20:36:48.870302 systemd-networkd[1412]: docker0: Link UP Jan 13 20:36:48.917410 dockerd[1723]: time="2025-01-13T20:36:48.917358040Z" level=info msg="Loading containers: done." Jan 13 20:36:48.932339 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3627639597-merged.mount: Deactivated successfully. Jan 13 20:36:48.938492 dockerd[1723]: time="2025-01-13T20:36:48.938423675Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:36:48.938584 dockerd[1723]: time="2025-01-13T20:36:48.938559730Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:36:48.938733 dockerd[1723]: time="2025-01-13T20:36:48.938708800Z" level=info msg="Daemon has completed initialization" Jan 13 20:36:49.048141 dockerd[1723]: time="2025-01-13T20:36:49.048046838Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:36:49.048311 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 13 20:36:55.139160 containerd[1490]: time="2025-01-13T20:36:55.139110376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:36:56.305546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:36:56.319124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:36:56.457567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:36:56.462892 (kubelet)[1936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:36:57.011060 kubelet[1936]: E0113 20:36:57.011006 1936 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:36:57.016012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:36:57.016402 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:36:57.030686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479821181.mount: Deactivated successfully. Jan 13 20:36:58.650442 containerd[1490]: time="2025-01-13T20:36:58.650334365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:58.654299 containerd[1490]: time="2025-01-13T20:36:58.654166451Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:36:58.657289 containerd[1490]: time="2025-01-13T20:36:58.656080130Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:58.660085 containerd[1490]: time="2025-01-13T20:36:58.660037732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:58.661497 containerd[1490]: time="2025-01-13T20:36:58.661431526Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.522269051s" Jan 13 20:36:58.661497 containerd[1490]: time="2025-01-13T20:36:58.661479836Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:36:58.688590 containerd[1490]: time="2025-01-13T20:36:58.688529915Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:37:01.371023 containerd[1490]: time="2025-01-13T20:37:01.370905517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:01.379511 containerd[1490]: time="2025-01-13T20:37:01.379463446Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes 
read=32217732" Jan 13 20:37:01.399302 containerd[1490]: time="2025-01-13T20:37:01.399265011Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:01.410513 containerd[1490]: time="2025-01-13T20:37:01.410463652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:01.411539 containerd[1490]: time="2025-01-13T20:37:01.411509524Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.722935206s" Jan 13 20:37:01.411539 containerd[1490]: time="2025-01-13T20:37:01.411534060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 20:37:01.445764 containerd[1490]: time="2025-01-13T20:37:01.445713799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:37:03.599330 containerd[1490]: time="2025-01-13T20:37:03.599249998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:03.608917 containerd[1490]: time="2025-01-13T20:37:03.608841325Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 20:37:03.617245 containerd[1490]: time="2025-01-13T20:37:03.617195262Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:03.630482 containerd[1490]: time="2025-01-13T20:37:03.630441754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:03.631554 containerd[1490]: time="2025-01-13T20:37:03.631508714Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.185755532s" Jan 13 20:37:03.631554 containerd[1490]: time="2025-01-13T20:37:03.631544712Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:37:03.658850 containerd[1490]: time="2025-01-13T20:37:03.658586786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:37:05.805271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162146118.mount: Deactivated successfully. 
Jan 13 20:37:06.787326 containerd[1490]: time="2025-01-13T20:37:06.787263187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:06.788069 containerd[1490]: time="2025-01-13T20:37:06.787998965Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:37:06.789285 containerd[1490]: time="2025-01-13T20:37:06.789256579Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:06.791079 containerd[1490]: time="2025-01-13T20:37:06.791031158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:06.791639 containerd[1490]: time="2025-01-13T20:37:06.791608391Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 3.132981188s" Jan 13 20:37:06.791639 containerd[1490]: time="2025-01-13T20:37:06.791636144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:37:06.830890 containerd[1490]: time="2025-01-13T20:37:06.830849564Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:37:07.055658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:37:07.065067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:07.207716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:07.215111 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:37:07.750046 kubelet[2048]: E0113 20:37:07.749951 2048 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:37:07.755346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:37:07.755595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:37:10.223604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2211185561.mount: Deactivated successfully. 
Jan 13 20:37:12.821538 containerd[1490]: time="2025-01-13T20:37:12.821471676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:12.822228 containerd[1490]: time="2025-01-13T20:37:12.822183275Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:37:12.823399 containerd[1490]: time="2025-01-13T20:37:12.823357218Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:12.825934 containerd[1490]: time="2025-01-13T20:37:12.825891870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:12.827052 containerd[1490]: time="2025-01-13T20:37:12.827022469Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 5.996135774s" Jan 13 20:37:12.827052 containerd[1490]: time="2025-01-13T20:37:12.827048479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:37:12.850401 containerd[1490]: time="2025-01-13T20:37:12.850281382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:37:13.465738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646904232.mount: Deactivated successfully. 
Jan 13 20:37:13.483971 containerd[1490]: time="2025-01-13T20:37:13.483924616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:13.489545 containerd[1490]: time="2025-01-13T20:37:13.489505606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:37:13.491130 containerd[1490]: time="2025-01-13T20:37:13.491096401Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:13.494058 containerd[1490]: time="2025-01-13T20:37:13.494025410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:13.494857 containerd[1490]: time="2025-01-13T20:37:13.494809456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 644.490232ms" Jan 13 20:37:13.494902 containerd[1490]: time="2025-01-13T20:37:13.494856446Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:37:13.515544 containerd[1490]: time="2025-01-13T20:37:13.515502141Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:37:14.100228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643676267.mount: Deactivated successfully. Jan 13 20:37:16.792284 containerd[1490]: time="2025-01-13T20:37:16.792222479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:16.793026 containerd[1490]: time="2025-01-13T20:37:16.792945876Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 20:37:16.794395 containerd[1490]: time="2025-01-13T20:37:16.794333055Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:16.797833 containerd[1490]: time="2025-01-13T20:37:16.797778880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:16.799648 containerd[1490]: time="2025-01-13T20:37:16.799583443Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.284045663s" Jan 13 20:37:16.799648 containerd[1490]: time="2025-01-13T20:37:16.799625603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:37:17.805545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 13 20:37:17.814417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:17.957098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:17.961855 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:37:18.017271 kubelet[2246]: E0113 20:37:18.017188 2246 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:37:18.021909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:37:18.022123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:37:18.961960 update_engine[1481]: I20250113 20:37:18.961849 1481 update_attempter.cc:509] Updating boot flags... Jan 13 20:37:19.213864 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2262) Jan 13 20:37:19.259023 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2263) Jan 13 20:37:19.476064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:19.487018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:19.504290 systemd[1]: Reloading requested from client PID 2276 ('systemctl') (unit session-9.scope)... Jan 13 20:37:19.504315 systemd[1]: Reloading... Jan 13 20:37:19.578850 zram_generator::config[2318]: No configuration found. Jan 13 20:37:20.204025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:37:20.280787 systemd[1]: Reloading finished in 776 ms. Jan 13 20:37:20.332238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:20.335933 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:20.337550 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:37:20.337931 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:20.348287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:20.495267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:20.500660 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:37:20.548977 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:37:20.548977 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:37:20.548977 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:37:20.549333 kubelet[2365]: I0113 20:37:20.549018 2365 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:37:20.926057 kubelet[2365]: I0113 20:37:20.926002 2365 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:37:20.926057 kubelet[2365]: I0113 20:37:20.926043 2365 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:37:20.926289 kubelet[2365]: I0113 20:37:20.926266 2365 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:37:20.961746 kubelet[2365]: E0113 20:37:20.961709 2365 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:20.963553 kubelet[2365]: I0113 20:37:20.963507 2365 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:37:21.049577 kubelet[2365]: I0113 20:37:21.049538 2365 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:37:21.049891 kubelet[2365]: I0113 20:37:21.049866 2365 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:37:21.050067 kubelet[2365]: I0113 20:37:21.050043 2365 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:37:21.060575 kubelet[2365]: I0113 20:37:21.060533 2365 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:37:21.060575 kubelet[2365]: I0113 20:37:21.060561 2365 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:37:21.060737 kubelet[2365]: I0113 20:37:21.060708 2365 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:37:21.060865 kubelet[2365]: I0113 20:37:21.060841 2365 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:37:21.060865 kubelet[2365]: 
I0113 20:37:21.060859 2365 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:37:21.060918 kubelet[2365]: I0113 20:37:21.060884 2365 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:37:21.060918 kubelet[2365]: I0113 20:37:21.060899 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:37:21.061564 kubelet[2365]: W0113 20:37:21.061504 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.061603 kubelet[2365]: E0113 20:37:21.061563 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.061983 kubelet[2365]: W0113 20:37:21.061935 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.061983 kubelet[2365]: E0113 20:37:21.061975 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.064298 kubelet[2365]: I0113 20:37:21.064256 2365 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:37:21.068547 kubelet[2365]: I0113 20:37:21.068519 2365 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:37:21.081474 kubelet[2365]: W0113 20:37:21.081435 2365 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
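The reflector warnings above are client-go repeatedly issuing list requests such as GET /api/v1/nodes?fieldSelector=metadata.name=localhost and getting "connection refused" because kube-apiserver is not yet listening on 10.0.0.79:6443. A hedged client-go sketch of that same request; the kubeconfig path is an assumption for illustration, not something taken from this log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative kubeconfig path (assumed, not from the log).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Same request the reflector keeps retrying:
    	// GET /api/v1/nodes?fieldSelector=metadata.name=localhost&limit=500
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
    		FieldSelector: "metadata.name=localhost",
    		Limit:         500,
    	})
    	if err != nil {
    		// Until the API server is listening on 10.0.0.79:6443 this fails
    		// with "connection refused", just as the log shows.
    		fmt.Println("list nodes:", err)
    		return
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }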
Jan 13 20:37:21.082172 kubelet[2365]: I0113 20:37:21.082041 2365 server.go:1256] "Started kubelet" Jan 13 20:37:21.082172 kubelet[2365]: I0113 20:37:21.082105 2365 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:37:21.082266 kubelet[2365]: I0113 20:37:21.082257 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:37:21.082650 kubelet[2365]: I0113 20:37:21.082624 2365 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:37:21.083836 kubelet[2365]: I0113 20:37:21.083313 2365 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:37:21.083836 kubelet[2365]: I0113 20:37:21.083499 2365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:37:21.084886 kubelet[2365]: E0113 20:37:21.084608 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:21.084886 kubelet[2365]: I0113 20:37:21.084639 2365 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:37:21.084886 kubelet[2365]: I0113 20:37:21.084737 2365 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:37:21.084886 kubelet[2365]: I0113 20:37:21.084781 2365 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:37:21.085167 kubelet[2365]: W0113 20:37:21.085116 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.085167 kubelet[2365]: E0113 20:37:21.085168 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.086222 kubelet[2365]: E0113 20:37:21.086198 2365 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:37:21.086878 kubelet[2365]: I0113 20:37:21.086857 2365 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:37:21.086970 kubelet[2365]: I0113 20:37:21.086950 2365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:37:21.089948 kubelet[2365]: I0113 20:37:21.087883 2365 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:37:21.090627 kubelet[2365]: E0113 20:37:21.090606 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" Jan 13 20:37:21.093475 kubelet[2365]: E0113 20:37:21.093413 2365 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5afb295d3d80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:37:21.082015104 +0000 UTC m=+0.576810650,LastTimestamp:2025-01-13 20:37:21.082015104 +0000 UTC m=+0.576810650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:37:21.100886 kubelet[2365]: I0113 20:37:21.100842 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:37:21.102246 kubelet[2365]: I0113 20:37:21.102221 2365 kubelet_network_linux.go:50] "Initialized iptables rules." 
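The factory registrations above reflect cAdvisor probing one container factory per runtime it can reach: the CRI-O probe fails because /var/run/crio/crio.sock does not exist, while the systemd and containerd factories register successfully. A small sketch of that socket probe; the containerd socket path is the usual default and an assumption here, since the log does not print it:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	candidates := []string{
    		"/var/run/crio/crio.sock",            // absent on this node, per the log
    		"/run/containerd/containerd.sock",    // assumed default containerd endpoint
    	}
    	for _, p := range candidates {
    		if _, err := os.Stat(p); err != nil {
    			fmt.Printf("%-35s absent (%v)\n", p, err)
    			continue
    		}
    		fmt.Printf("%-35s present\n", p)
    	}
    }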
protocol="IPv6" Jan 13 20:37:21.102246 kubelet[2365]: I0113 20:37:21.102246 2365 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:37:21.102312 kubelet[2365]: I0113 20:37:21.102261 2365 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:37:21.102312 kubelet[2365]: E0113 20:37:21.102301 2365 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:37:21.137468 kubelet[2365]: W0113 20:37:21.137386 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.137468 kubelet[2365]: E0113 20:37:21.137439 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.142386 kubelet[2365]: I0113 20:37:21.142348 2365 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:37:21.142386 kubelet[2365]: I0113 20:37:21.142364 2365 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:37:21.142473 kubelet[2365]: I0113 20:37:21.142395 2365 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:37:21.186796 kubelet[2365]: I0113 20:37:21.186704 2365 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:37:21.187017 kubelet[2365]: E0113 20:37:21.186981 2365 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:37:21.203290 kubelet[2365]: E0113 20:37:21.203255 2365 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:37:21.291188 kubelet[2365]: E0113 20:37:21.291137 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" Jan 13 20:37:21.390400 kubelet[2365]: I0113 20:37:21.390372 2365 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:37:21.391018 kubelet[2365]: E0113 20:37:21.390984 2365 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:37:21.403419 kubelet[2365]: E0113 20:37:21.403392 2365 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:37:21.691949 kubelet[2365]: E0113 20:37:21.691917 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" Jan 13 20:37:21.792301 kubelet[2365]: I0113 20:37:21.792267 2365 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:37:21.792555 kubelet[2365]: E0113 20:37:21.792534 2365 kubelet_node_status.go:96] "Unable to register 
node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:37:21.803731 kubelet[2365]: E0113 20:37:21.803702 2365 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:37:21.979915 kubelet[2365]: W0113 20:37:21.979801 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:21.979915 kubelet[2365]: E0113 20:37:21.979867 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:22.231000 kubelet[2365]: W0113 20:37:22.230893 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:22.231000 kubelet[2365]: E0113 20:37:22.230955 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:22.344808 kubelet[2365]: W0113 20:37:22.344743 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:22.344808 kubelet[2365]: E0113 20:37:22.344809 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:22.492944 kubelet[2365]: E0113 20:37:22.492875 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s" Jan 13 20:37:22.594673 kubelet[2365]: I0113 20:37:22.594637 2365 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:37:22.595081 kubelet[2365]: E0113 20:37:22.595054 2365 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:37:22.604322 kubelet[2365]: E0113 20:37:22.604268 2365 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:37:22.637795 kubelet[2365]: W0113 20:37:22.637731 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:22.637795 kubelet[2365]: E0113 20:37:22.637789 2365 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:22.965066 kubelet[2365]: E0113 20:37:22.965024 2365 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:23.316663 kubelet[2365]: I0113 20:37:23.316605 2365 policy_none.go:49] "None policy: Start" Jan 13 20:37:23.318022 kubelet[2365]: I0113 20:37:23.317978 2365 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:37:23.318092 kubelet[2365]: I0113 20:37:23.318029 2365 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:37:23.327224 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:37:23.350502 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:37:23.369345 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:37:23.371895 kubelet[2365]: I0113 20:37:23.370870 2365 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:37:23.371895 kubelet[2365]: I0113 20:37:23.371181 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:37:23.372622 kubelet[2365]: E0113 20:37:23.372594 2365 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:37:23.435644 kubelet[2365]: E0113 20:37:23.435608 2365 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5afb295d3d80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:37:21.082015104 +0000 UTC m=+0.576810650,LastTimestamp:2025-01-13 20:37:21.082015104 +0000 UTC m=+0.576810650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:37:23.879674 kubelet[2365]: W0113 20:37:23.879611 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:23.879674 kubelet[2365]: E0113 20:37:23.879664 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:24.093650 kubelet[2365]: E0113 20:37:24.093594 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="3.2s" Jan 13 20:37:24.196750 kubelet[2365]: I0113 20:37:24.196623 2365 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:37:24.197038 kubelet[2365]: E0113 20:37:24.197012 2365 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:37:24.205220 kubelet[2365]: I0113 20:37:24.205179 2365 topology_manager.go:215] "Topology Admit Handler" podUID="9e50fa01d4d7aab204bca9e1254d4123" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:37:24.206030 kubelet[2365]: I0113 20:37:24.205985 2365 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:37:24.206676 kubelet[2365]: I0113 20:37:24.206654 2365 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:37:24.212463 systemd[1]: Created slice kubepods-burstable-pod9e50fa01d4d7aab204bca9e1254d4123.slice - libcontainer container kubepods-burstable-pod9e50fa01d4d7aab204bca9e1254d4123.slice. Jan 13 20:37:24.222533 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Jan 13 20:37:24.236634 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
Jan 13 20:37:24.305584 kubelet[2365]: I0113 20:37:24.305529 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e50fa01d4d7aab204bca9e1254d4123-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e50fa01d4d7aab204bca9e1254d4123\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:37:24.305584 kubelet[2365]: I0113 20:37:24.305581 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e50fa01d4d7aab204bca9e1254d4123-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e50fa01d4d7aab204bca9e1254d4123\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:37:24.305747 kubelet[2365]: I0113 20:37:24.305613 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e50fa01d4d7aab204bca9e1254d4123-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e50fa01d4d7aab204bca9e1254d4123\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:37:24.305747 kubelet[2365]: I0113 20:37:24.305690 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:24.305836 kubelet[2365]: I0113 20:37:24.305769 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:24.305836 kubelet[2365]: I0113 20:37:24.305811 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:24.305922 kubelet[2365]: I0113 20:37:24.305859 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:37:24.305922 kubelet[2365]: I0113 20:37:24.305883 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:24.305922 kubelet[2365]: I0113 20:37:24.305903 2365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:24.492308 kubelet[2365]: W0113 20:37:24.492163 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:24.492308 kubelet[2365]: E0113 20:37:24.492215 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:24.521636 kubelet[2365]: E0113 20:37:24.521582 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:24.522339 containerd[1490]: time="2025-01-13T20:37:24.522274516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e50fa01d4d7aab204bca9e1254d4123,Namespace:kube-system,Attempt:0,}" Jan 13 20:37:24.534512 kubelet[2365]: E0113 20:37:24.534468 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:24.534995 containerd[1490]: time="2025-01-13T20:37:24.534944561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:37:24.539137 kubelet[2365]: E0113 20:37:24.539099 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:24.539430 containerd[1490]: time="2025-01-13T20:37:24.539383366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:37:24.551090 kubelet[2365]: W0113 20:37:24.551036 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:24.551169 kubelet[2365]: E0113 20:37:24.551094 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:24.731619 kubelet[2365]: W0113 20:37:24.731562 2365 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:24.731619 kubelet[2365]: E0113 20:37:24.731614 2365 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:37:25.113448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542214961.mount: Deactivated successfully. 
Jan 13 20:37:25.125064 containerd[1490]: time="2025-01-13T20:37:25.125000588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:25.131310 containerd[1490]: time="2025-01-13T20:37:25.131223362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:37:25.132494 containerd[1490]: time="2025-01-13T20:37:25.132438419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:25.134748 containerd[1490]: time="2025-01-13T20:37:25.134680597Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:25.135503 containerd[1490]: time="2025-01-13T20:37:25.135443440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:37:25.136512 containerd[1490]: time="2025-01-13T20:37:25.136444162Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:25.137366 containerd[1490]: time="2025-01-13T20:37:25.137315819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:37:25.138419 containerd[1490]: time="2025-01-13T20:37:25.138378278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:25.140454 containerd[1490]: time="2025-01-13T20:37:25.140406583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 600.932865ms" Jan 13 20:37:25.141281 containerd[1490]: time="2025-01-13T20:37:25.141249657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.84768ms" Jan 13 20:37:25.150066 containerd[1490]: time="2025-01-13T20:37:25.150004096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.939989ms" Jan 13 20:37:25.360225 containerd[1490]: time="2025-01-13T20:37:25.359999477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:37:25.360725 containerd[1490]: time="2025-01-13T20:37:25.360519681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:37:25.360725 containerd[1490]: time="2025-01-13T20:37:25.360555219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:25.360809 containerd[1490]: time="2025-01-13T20:37:25.360671849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:25.367199 containerd[1490]: time="2025-01-13T20:37:25.359800331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:37:25.367388 containerd[1490]: time="2025-01-13T20:37:25.366925801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:37:25.367388 containerd[1490]: time="2025-01-13T20:37:25.366942644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:25.367388 containerd[1490]: time="2025-01-13T20:37:25.367040819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:25.371840 containerd[1490]: time="2025-01-13T20:37:25.371620127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:37:25.371840 containerd[1490]: time="2025-01-13T20:37:25.371668908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:37:25.371840 containerd[1490]: time="2025-01-13T20:37:25.371680801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:25.371945 containerd[1490]: time="2025-01-13T20:37:25.371778686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:25.405034 systemd[1]: Started cri-containerd-82893251894657ce8a6e45bc06c5f9f79ab59ed76b95b3a37a5465307f5a1bef.scope - libcontainer container 82893251894657ce8a6e45bc06c5f9f79ab59ed76b95b3a37a5465307f5a1bef. Jan 13 20:37:25.410363 systemd[1]: Started cri-containerd-86957ab3ead5060de4b83087b565caaa690a86f5a6f6cc575a8953ad41e7c7fd.scope - libcontainer container 86957ab3ead5060de4b83087b565caaa690a86f5a6f6cc575a8953ad41e7c7fd. Jan 13 20:37:25.412806 systemd[1]: Started cri-containerd-f1c57b4d0dae15db9d70e647f505b049583f2f9a162cfea746c5fff30fc114b5.scope - libcontainer container f1c57b4d0dae15db9d70e647f505b049583f2f9a162cfea746c5fff30fc114b5. 
Jan 13 20:37:25.488095 containerd[1490]: time="2025-01-13T20:37:25.483778770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"82893251894657ce8a6e45bc06c5f9f79ab59ed76b95b3a37a5465307f5a1bef\"" Jan 13 20:37:25.488276 kubelet[2365]: E0113 20:37:25.485233 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:25.490148 containerd[1490]: time="2025-01-13T20:37:25.489705824Z" level=info msg="CreateContainer within sandbox \"82893251894657ce8a6e45bc06c5f9f79ab59ed76b95b3a37a5465307f5a1bef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:37:25.491405 containerd[1490]: time="2025-01-13T20:37:25.491253972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"86957ab3ead5060de4b83087b565caaa690a86f5a6f6cc575a8953ad41e7c7fd\"" Jan 13 20:37:25.492279 kubelet[2365]: E0113 20:37:25.492236 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:25.495540 containerd[1490]: time="2025-01-13T20:37:25.495507843Z" level=info msg="CreateContainer within sandbox \"86957ab3ead5060de4b83087b565caaa690a86f5a6f6cc575a8953ad41e7c7fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:37:25.556406 containerd[1490]: time="2025-01-13T20:37:25.556361556Z" level=info msg="CreateContainer within sandbox \"86957ab3ead5060de4b83087b565caaa690a86f5a6f6cc575a8953ad41e7c7fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc927e266ec91afcdbd4e4cb721811e36c23eb84cb6f89953043623a9f2fadd7\"" Jan 13 20:37:25.557624 containerd[1490]: time="2025-01-13T20:37:25.557499346Z" level=info msg="StartContainer for \"dc927e266ec91afcdbd4e4cb721811e36c23eb84cb6f89953043623a9f2fadd7\"" Jan 13 20:37:25.557842 containerd[1490]: time="2025-01-13T20:37:25.557794455Z" level=info msg="CreateContainer within sandbox \"82893251894657ce8a6e45bc06c5f9f79ab59ed76b95b3a37a5465307f5a1bef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a674141e1abd5a9ceeff08a052ef69d6597445801b5d7454b0e04a0d20769a7\"" Jan 13 20:37:25.558648 containerd[1490]: time="2025-01-13T20:37:25.558615276Z" level=info msg="StartContainer for \"2a674141e1abd5a9ceeff08a052ef69d6597445801b5d7454b0e04a0d20769a7\"" Jan 13 20:37:25.560555 containerd[1490]: time="2025-01-13T20:37:25.560528223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e50fa01d4d7aab204bca9e1254d4123,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1c57b4d0dae15db9d70e647f505b049583f2f9a162cfea746c5fff30fc114b5\"" Jan 13 20:37:25.561686 kubelet[2365]: E0113 20:37:25.561608 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:25.566207 containerd[1490]: time="2025-01-13T20:37:25.566175629Z" level=info msg="CreateContainer within sandbox \"f1c57b4d0dae15db9d70e647f505b049583f2f9a162cfea746c5fff30fc114b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 
20:37:25.596880 containerd[1490]: time="2025-01-13T20:37:25.596802088Z" level=info msg="CreateContainer within sandbox \"f1c57b4d0dae15db9d70e647f505b049583f2f9a162cfea746c5fff30fc114b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c4196c72c223e10e5fcca15a993a0809e1bd7044878a03d80e8055d7ce3574ec\"" Jan 13 20:37:25.597340 containerd[1490]: time="2025-01-13T20:37:25.597312041Z" level=info msg="StartContainer for \"c4196c72c223e10e5fcca15a993a0809e1bd7044878a03d80e8055d7ce3574ec\"" Jan 13 20:37:25.601138 systemd[1]: Started cri-containerd-2a674141e1abd5a9ceeff08a052ef69d6597445801b5d7454b0e04a0d20769a7.scope - libcontainer container 2a674141e1abd5a9ceeff08a052ef69d6597445801b5d7454b0e04a0d20769a7. Jan 13 20:37:25.602801 systemd[1]: Started cri-containerd-dc927e266ec91afcdbd4e4cb721811e36c23eb84cb6f89953043623a9f2fadd7.scope - libcontainer container dc927e266ec91afcdbd4e4cb721811e36c23eb84cb6f89953043623a9f2fadd7. Jan 13 20:37:25.639122 systemd[1]: Started cri-containerd-c4196c72c223e10e5fcca15a993a0809e1bd7044878a03d80e8055d7ce3574ec.scope - libcontainer container c4196c72c223e10e5fcca15a993a0809e1bd7044878a03d80e8055d7ce3574ec. Jan 13 20:37:25.777174 containerd[1490]: time="2025-01-13T20:37:25.777116858Z" level=info msg="StartContainer for \"dc927e266ec91afcdbd4e4cb721811e36c23eb84cb6f89953043623a9f2fadd7\" returns successfully" Jan 13 20:37:25.777331 containerd[1490]: time="2025-01-13T20:37:25.777127148Z" level=info msg="StartContainer for \"2a674141e1abd5a9ceeff08a052ef69d6597445801b5d7454b0e04a0d20769a7\" returns successfully" Jan 13 20:37:25.798437 containerd[1490]: time="2025-01-13T20:37:25.798384675Z" level=info msg="StartContainer for \"c4196c72c223e10e5fcca15a993a0809e1bd7044878a03d80e8055d7ce3574ec\" returns successfully" Jan 13 20:37:26.158477 kubelet[2365]: E0113 20:37:26.158439 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:26.163569 kubelet[2365]: E0113 20:37:26.163535 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:26.165981 kubelet[2365]: E0113 20:37:26.165955 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:27.228750 kubelet[2365]: E0113 20:37:27.228709 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:27.311581 kubelet[2365]: E0113 20:37:27.311537 2365 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:37:27.398745 kubelet[2365]: I0113 20:37:27.398702 2365 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:37:27.459689 kubelet[2365]: I0113 20:37:27.459641 2365 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:37:27.522053 kubelet[2365]: E0113 20:37:27.521856 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:27.622561 kubelet[2365]: E0113 20:37:27.622514 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 13 20:37:27.723132 kubelet[2365]: E0113 20:37:27.723065 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:27.823953 kubelet[2365]: E0113 20:37:27.823902 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:27.924597 kubelet[2365]: E0113 20:37:27.924537 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.025373 kubelet[2365]: E0113 20:37:28.025302 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.126470 kubelet[2365]: E0113 20:37:28.126334 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.227054 kubelet[2365]: E0113 20:37:28.226983 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.327445 kubelet[2365]: E0113 20:37:28.327402 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.427999 kubelet[2365]: E0113 20:37:28.427881 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.529017 kubelet[2365]: E0113 20:37:28.528973 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.629592 kubelet[2365]: E0113 20:37:28.629532 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.730387 kubelet[2365]: E0113 20:37:28.730081 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.830632 kubelet[2365]: E0113 20:37:28.830590 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:28.931179 kubelet[2365]: E0113 20:37:28.931127 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:29.032098 kubelet[2365]: E0113 20:37:29.031903 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:29.132229 kubelet[2365]: E0113 20:37:29.132186 2365 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:37:29.734583 kubelet[2365]: E0113 20:37:29.734548 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:29.940437 kubelet[2365]: E0113 20:37:29.940400 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:30.067721 kubelet[2365]: I0113 20:37:30.067663 2365 apiserver.go:52] "Watching apiserver" Jan 13 20:37:30.084954 kubelet[2365]: I0113 20:37:30.084882 2365 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:37:30.115435 systemd[1]: Reloading requested from client PID 2641 ('systemctl') (unit session-9.scope)... Jan 13 20:37:30.115455 systemd[1]: Reloading... 
Jan 13 20:37:30.212871 zram_generator::config[2681]: No configuration found. Jan 13 20:37:30.231730 kubelet[2365]: E0113 20:37:30.231700 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:30.231934 kubelet[2365]: E0113 20:37:30.231913 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:30.321605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:37:30.414256 systemd[1]: Reloading finished in 298 ms. Jan 13 20:37:30.467167 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:30.491264 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:37:30.491565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:30.491623 systemd[1]: kubelet.service: Consumed 1.040s CPU time, 114.9M memory peak, 0B memory swap peak. Jan 13 20:37:30.503144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:30.646691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:30.652601 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:37:30.709392 kubelet[2725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:37:30.709792 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:37:30.709792 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:37:30.709974 kubelet[2725]: I0113 20:37:30.709868 2725 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:37:30.714768 kubelet[2725]: I0113 20:37:30.714730 2725 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:37:30.714768 kubelet[2725]: I0113 20:37:30.714763 2725 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:37:30.715024 kubelet[2725]: I0113 20:37:30.715008 2725 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:37:30.718770 kubelet[2725]: I0113 20:37:30.718734 2725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
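Unlike the earlier kubelet instance, this one finds a bootstrapped client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem, so it can authenticate to the API server immediately instead of waiting on a signing request. A sketch that inspects that file's validity window, assuming nothing beyond the path printed by certificate_store.go above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	// The file holds the rotated client cert/key pair; print the first
    	// certificate block's subject and validity window.
    	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
    		if block.Type != "CERTIFICATE" {
    			continue
    		}
    		cert, err := x509.ParseCertificate(block.Bytes)
    		if err != nil {
    			fmt.Println("parse:", err)
    			return
    		}
    		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
    			cert.Subject, cert.NotBefore, cert.NotAfter)
    		return
    	}
    	fmt.Println("no CERTIFICATE block found")
    }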
Jan 13 20:37:30.720973 kubelet[2725]: I0113 20:37:30.720753 2725 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:37:30.728020 sudo[2741]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:37:30.728432 sudo[2741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:37:30.730249 kubelet[2725]: I0113 20:37:30.730202 2725 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:37:30.730543 kubelet[2725]: I0113 20:37:30.730520 2725 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:37:30.730795 kubelet[2725]: I0113 20:37:30.730769 2725 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:37:30.730888 kubelet[2725]: I0113 20:37:30.730814 2725 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:37:30.730888 kubelet[2725]: I0113 20:37:30.730843 2725 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:37:30.730888 kubelet[2725]: I0113 20:37:30.730878 2725 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:37:30.731046 kubelet[2725]: I0113 20:37:30.731017 2725 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:37:30.731084 kubelet[2725]: I0113 20:37:30.731050 2725 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:37:30.731128 kubelet[2725]: I0113 20:37:30.731117 2725 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:37:30.734563 kubelet[2725]: I0113 20:37:30.734533 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:37:30.735408 kubelet[2725]: I0113 20:37:30.735338 2725 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:37:30.735570 kubelet[2725]: I0113 20:37:30.735551 2725 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:37:30.737183 kubelet[2725]: I0113 
20:37:30.737158 2725 server.go:1256] "Started kubelet" Jan 13 20:37:30.740327 kubelet[2725]: I0113 20:37:30.737698 2725 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:37:30.740327 kubelet[2725]: I0113 20:37:30.738609 2725 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:37:30.741504 kubelet[2725]: I0113 20:37:30.741474 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:37:30.742136 kubelet[2725]: I0113 20:37:30.742090 2725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:37:30.743456 kubelet[2725]: I0113 20:37:30.743433 2725 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:37:30.753155 kubelet[2725]: I0113 20:37:30.753098 2725 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:37:30.753670 kubelet[2725]: I0113 20:37:30.753559 2725 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:37:30.754140 kubelet[2725]: I0113 20:37:30.754061 2725 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:37:30.758791 kubelet[2725]: I0113 20:37:30.758518 2725 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:37:30.758791 kubelet[2725]: I0113 20:37:30.758603 2725 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:37:30.759312 kubelet[2725]: E0113 20:37:30.759268 2725 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:37:30.760778 kubelet[2725]: I0113 20:37:30.760665 2725 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:37:30.765877 kubelet[2725]: I0113 20:37:30.765847 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:37:30.767902 kubelet[2725]: I0113 20:37:30.767809 2725 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:37:30.767939 kubelet[2725]: I0113 20:37:30.767905 2725 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:37:30.767939 kubelet[2725]: I0113 20:37:30.767925 2725 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:37:30.768023 kubelet[2725]: E0113 20:37:30.767980 2725 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:37:30.805687 kubelet[2725]: I0113 20:37:30.805658 2725 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:37:30.805848 kubelet[2725]: I0113 20:37:30.805837 2725 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:37:30.805934 kubelet[2725]: I0113 20:37:30.805924 2725 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:37:30.806125 kubelet[2725]: I0113 20:37:30.806113 2725 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:37:30.806187 kubelet[2725]: I0113 20:37:30.806179 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:37:30.806236 kubelet[2725]: I0113 20:37:30.806228 2725 policy_none.go:49] "None policy: Start" Jan 13 20:37:30.806989 kubelet[2725]: I0113 20:37:30.806977 2725 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:37:30.807142 kubelet[2725]: I0113 20:37:30.807131 2725 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:37:30.807319 kubelet[2725]: I0113 20:37:30.807307 2725 state_mem.go:75] "Updated machine memory state" Jan 13 20:37:30.812180 kubelet[2725]: I0113 20:37:30.812148 2725 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:37:30.812651 kubelet[2725]: I0113 20:37:30.812618 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:37:30.868622 kubelet[2725]: I0113 20:37:30.868567 2725 topology_manager.go:215] "Topology Admit Handler" podUID="9e50fa01d4d7aab204bca9e1254d4123" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:37:30.868721 kubelet[2725]: I0113 20:37:30.868681 2725 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:37:30.868721 kubelet[2725]: I0113 20:37:30.868716 2725 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:37:30.921700 kubelet[2725]: I0113 20:37:30.921605 2725 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:37:30.929682 kubelet[2725]: E0113 20:37:30.929635 2725 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 20:37:30.964914 kubelet[2725]: E0113 20:37:30.964865 2725 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:37:31.028939 kubelet[2725]: I0113 20:37:31.028803 2725 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:37:31.028939 kubelet[2725]: I0113 20:37:31.028908 2725 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:37:31.054972 kubelet[2725]: I0113 20:37:31.054917 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e50fa01d4d7aab204bca9e1254d4123-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e50fa01d4d7aab204bca9e1254d4123\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:37:31.054972 kubelet[2725]: I0113 20:37:31.054967 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:31.054972 kubelet[2725]: I0113 20:37:31.054993 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e50fa01d4d7aab204bca9e1254d4123-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e50fa01d4d7aab204bca9e1254d4123\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:37:31.055366 kubelet[2725]: I0113 20:37:31.055015 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:31.055366 kubelet[2725]: I0113 20:37:31.055046 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:31.055366 kubelet[2725]: I0113 20:37:31.055069 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:31.055366 kubelet[2725]: I0113 20:37:31.055092 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:37:31.055366 kubelet[2725]: I0113 20:37:31.055114 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:37:31.055503 kubelet[2725]: I0113 20:37:31.055134 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e50fa01d4d7aab204bca9e1254d4123-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e50fa01d4d7aab204bca9e1254d4123\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:37:31.225120 sudo[2741]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:31.232161 
kubelet[2725]: E0113 20:37:31.232045 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:31.232161 kubelet[2725]: E0113 20:37:31.232075 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:31.266127 kubelet[2725]: E0113 20:37:31.265856 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:31.735434 kubelet[2725]: I0113 20:37:31.735382 2725 apiserver.go:52] "Watching apiserver" Jan 13 20:37:31.754297 kubelet[2725]: I0113 20:37:31.754243 2725 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:37:31.788080 kubelet[2725]: E0113 20:37:31.787623 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:31.793205 kubelet[2725]: E0113 20:37:31.793174 2725 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 20:37:31.793659 kubelet[2725]: E0113 20:37:31.793642 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:31.793728 kubelet[2725]: E0113 20:37:31.793711 2725 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:37:31.794196 kubelet[2725]: E0113 20:37:31.794178 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:31.805226 kubelet[2725]: I0113 20:37:31.805192 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8051552549999998 podStartE2EDuration="1.805155255s" podCreationTimestamp="2025-01-13 20:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:37:31.804999592 +0000 UTC m=+1.141372089" watchObservedRunningTime="2025-01-13 20:37:31.805155255 +0000 UTC m=+1.141527732" Jan 13 20:37:31.811694 kubelet[2725]: I0113 20:37:31.811654 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.811612882 podStartE2EDuration="2.811612882s" podCreationTimestamp="2025-01-13 20:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:37:31.811511861 +0000 UTC m=+1.147884348" watchObservedRunningTime="2025-01-13 20:37:31.811612882 +0000 UTC m=+1.147985369" Jan 13 20:37:31.818140 kubelet[2725]: I0113 20:37:31.818107 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.818069196 podStartE2EDuration="2.818069196s" podCreationTimestamp="2025-01-13 20:37:29 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:37:31.817891591 +0000 UTC m=+1.154264078" watchObservedRunningTime="2025-01-13 20:37:31.818069196 +0000 UTC m=+1.154441683" Jan 13 20:37:32.627769 sudo[1693]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:32.629285 sshd[1688]: Connection closed by 10.0.0.1 port 54166 Jan 13 20:37:32.629951 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:32.634677 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:54166.service: Deactivated successfully. Jan 13 20:37:32.636655 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:37:32.636867 systemd[1]: session-9.scope: Consumed 5.076s CPU time, 187.1M memory peak, 0B memory swap peak. Jan 13 20:37:32.637451 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:37:32.638407 systemd-logind[1479]: Removed session 9. Jan 13 20:37:32.788855 kubelet[2725]: E0113 20:37:32.788796 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:32.789314 kubelet[2725]: E0113 20:37:32.789211 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:34.466115 kubelet[2725]: E0113 20:37:34.466071 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:40.059164 kubelet[2725]: E0113 20:37:40.058601 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:40.806632 kubelet[2725]: E0113 20:37:40.806593 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:41.610099 kubelet[2725]: E0113 20:37:41.610049 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:44.471091 kubelet[2725]: E0113 20:37:44.471044 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:45.271203 kubelet[2725]: I0113 20:37:45.271171 2725 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:37:45.271504 containerd[1490]: time="2025-01-13T20:37:45.271466573Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
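The dns.go:153 warnings that recur throughout this boot mean the host's resolv.conf lists more nameservers than the kubelet will copy into pod resolv.conf files; only the first three (1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied and the rest are dropped. A minimal sketch of the same check, assuming the node's resolver config is /etc/resolv.conf and using the kubelet's usual limit of three nameservers:

```python
#!/usr/bin/env python3
"""Reproduce the kubelet's "Nameserver limits exceeded" check.

Path and limit are assumptions for illustration: /etc/resolv.conf is the
usual host resolver config, and three is the number of nameserver entries
the kubelet keeps when building a pod's resolv.conf.
"""
RESOLV_CONF = "/etc/resolv.conf"
NAMESERVER_LIMIT = 3

def nameservers(path: str = RESOLV_CONF) -> list[str]:
    servers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    return servers

if __name__ == "__main__":
    ns = nameservers()
    if len(ns) > NAMESERVER_LIMIT:
        print(f"{len(ns)} nameservers; kubelet would apply only: {' '.join(ns[:NAMESERVER_LIMIT])}")
    else:
        print(f"{len(ns)} nameservers; within the limit, no warning expected")
```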
Jan 13 20:37:45.271931 kubelet[2725]: I0113 20:37:45.271687 2725 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:37:45.389439 kubelet[2725]: I0113 20:37:45.389401 2725 topology_manager.go:215] "Topology Admit Handler" podUID="63907c0d-d564-4334-ab46-2416cbdd2615" podNamespace="kube-system" podName="kube-proxy-crrt6" Jan 13 20:37:45.401854 kubelet[2725]: I0113 20:37:45.399343 2725 topology_manager.go:215] "Topology Admit Handler" podUID="ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" podNamespace="kube-system" podName="cilium-9cbbj" Jan 13 20:37:45.399882 systemd[1]: Created slice kubepods-besteffort-pod63907c0d_d564_4334_ab46_2416cbdd2615.slice - libcontainer container kubepods-besteffort-pod63907c0d_d564_4334_ab46_2416cbdd2615.slice. Jan 13 20:37:45.413336 systemd[1]: Created slice kubepods-burstable-podad36e6ba_c8b3_45da_a1e8_258425c0c1c7.slice - libcontainer container kubepods-burstable-podad36e6ba_c8b3_45da_a1e8_258425c0c1c7.slice. Jan 13 20:37:45.547006 kubelet[2725]: I0113 20:37:45.546878 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-hostproc\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547006 kubelet[2725]: I0113 20:37:45.546932 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-hubble-tls\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547006 kubelet[2725]: I0113 20:37:45.546961 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb22d\" (UniqueName: \"kubernetes.io/projected/63907c0d-d564-4334-ab46-2416cbdd2615-kube-api-access-vb22d\") pod \"kube-proxy-crrt6\" (UID: \"63907c0d-d564-4334-ab46-2416cbdd2615\") " pod="kube-system/kube-proxy-crrt6" Jan 13 20:37:45.547006 kubelet[2725]: I0113 20:37:45.546986 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cni-path\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547006 kubelet[2725]: I0113 20:37:45.547009 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-bpf-maps\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547531 kubelet[2725]: I0113 20:37:45.547034 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-host-proc-sys-kernel\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547531 kubelet[2725]: I0113 20:37:45.547078 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdsj\" (UniqueName: \"kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-kube-api-access-dhdsj\") pod \"cilium-9cbbj\" (UID: 
\"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547531 kubelet[2725]: I0113 20:37:45.547104 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63907c0d-d564-4334-ab46-2416cbdd2615-kube-proxy\") pod \"kube-proxy-crrt6\" (UID: \"63907c0d-d564-4334-ab46-2416cbdd2615\") " pod="kube-system/kube-proxy-crrt6" Jan 13 20:37:45.547531 kubelet[2725]: I0113 20:37:45.547138 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-clustermesh-secrets\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547531 kubelet[2725]: I0113 20:37:45.547168 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-host-proc-sys-net\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547674 kubelet[2725]: I0113 20:37:45.547200 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63907c0d-d564-4334-ab46-2416cbdd2615-xtables-lock\") pod \"kube-proxy-crrt6\" (UID: \"63907c0d-d564-4334-ab46-2416cbdd2615\") " pod="kube-system/kube-proxy-crrt6" Jan 13 20:37:45.547674 kubelet[2725]: I0113 20:37:45.547224 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-cgroup\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547674 kubelet[2725]: I0113 20:37:45.547255 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-config-path\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547674 kubelet[2725]: I0113 20:37:45.547284 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63907c0d-d564-4334-ab46-2416cbdd2615-lib-modules\") pod \"kube-proxy-crrt6\" (UID: \"63907c0d-d564-4334-ab46-2416cbdd2615\") " pod="kube-system/kube-proxy-crrt6" Jan 13 20:37:45.547674 kubelet[2725]: I0113 20:37:45.547318 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-run\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547674 kubelet[2725]: I0113 20:37:45.547335 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-etc-cni-netd\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547866 kubelet[2725]: I0113 20:37:45.547356 2725 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-lib-modules\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.547866 kubelet[2725]: I0113 20:37:45.547442 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-xtables-lock\") pod \"cilium-9cbbj\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " pod="kube-system/cilium-9cbbj" Jan 13 20:37:45.656918 kubelet[2725]: E0113 20:37:45.656808 2725 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 20:37:45.656918 kubelet[2725]: E0113 20:37:45.656860 2725 projected.go:200] Error preparing data for projected volume kube-api-access-vb22d for pod kube-system/kube-proxy-crrt6: configmap "kube-root-ca.crt" not found Jan 13 20:37:45.656918 kubelet[2725]: E0113 20:37:45.656928 2725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63907c0d-d564-4334-ab46-2416cbdd2615-kube-api-access-vb22d podName:63907c0d-d564-4334-ab46-2416cbdd2615 nodeName:}" failed. No retries permitted until 2025-01-13 20:37:46.156903127 +0000 UTC m=+15.493275694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vb22d" (UniqueName: "kubernetes.io/projected/63907c0d-d564-4334-ab46-2416cbdd2615-kube-api-access-vb22d") pod "kube-proxy-crrt6" (UID: "63907c0d-d564-4334-ab46-2416cbdd2615") : configmap "kube-root-ca.crt" not found Jan 13 20:37:45.657388 kubelet[2725]: E0113 20:37:45.657363 2725 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 20:37:45.657490 kubelet[2725]: E0113 20:37:45.657395 2725 projected.go:200] Error preparing data for projected volume kube-api-access-dhdsj for pod kube-system/cilium-9cbbj: configmap "kube-root-ca.crt" not found Jan 13 20:37:45.657490 kubelet[2725]: E0113 20:37:45.657452 2725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-kube-api-access-dhdsj podName:ad36e6ba-c8b3-45da-a1e8-258425c0c1c7 nodeName:}" failed. No retries permitted until 2025-01-13 20:37:46.157433604 +0000 UTC m=+15.493806161 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dhdsj" (UniqueName: "kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-kube-api-access-dhdsj") pod "cilium-9cbbj" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7") : configmap "kube-root-ca.crt" not found Jan 13 20:37:46.309576 kubelet[2725]: E0113 20:37:46.309523 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:46.310173 containerd[1490]: time="2025-01-13T20:37:46.310131379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crrt6,Uid:63907c0d-d564-4334-ab46-2416cbdd2615,Namespace:kube-system,Attempt:0,}" Jan 13 20:37:46.315740 kubelet[2725]: E0113 20:37:46.315700 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:46.316119 containerd[1490]: time="2025-01-13T20:37:46.316076301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cbbj,Uid:ad36e6ba-c8b3-45da-a1e8-258425c0c1c7,Namespace:kube-system,Attempt:0,}" Jan 13 20:37:46.356143 containerd[1490]: time="2025-01-13T20:37:46.356001441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:37:46.356769 containerd[1490]: time="2025-01-13T20:37:46.356703090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:37:46.357190 containerd[1490]: time="2025-01-13T20:37:46.356795854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:46.357614 containerd[1490]: time="2025-01-13T20:37:46.357304209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:46.363837 containerd[1490]: time="2025-01-13T20:37:46.363648111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:37:46.364212 containerd[1490]: time="2025-01-13T20:37:46.363896307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:37:46.366117 containerd[1490]: time="2025-01-13T20:37:46.365484983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:46.366117 containerd[1490]: time="2025-01-13T20:37:46.365914871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:46.385987 systemd[1]: Started cri-containerd-cbb2c3a08ef9d3b815bcfcae9c20503b5023eb5ee32f5029fe8eb315c5437fbe.scope - libcontainer container cbb2c3a08ef9d3b815bcfcae9c20503b5023eb5ee32f5029fe8eb315c5437fbe. Jan 13 20:37:46.391322 systemd[1]: Started cri-containerd-3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75.scope - libcontainer container 3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75. 
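Both pods fail their first mount of the projected kube-api-access volume with configmap "kube-root-ca.crt" not found and are retried 500ms later; that ConfigMap is published into each namespace by a kube-controller-manager controller, so the retry succeeds once it lands, and only then are the sandbox scopes above started. A small sketch of the same wait, assuming the kubernetes Python client and a kubeconfig for this cluster (the timeout is an arbitrary choice, not a kubelet value):

```python
#!/usr/bin/env python3
"""Wait for the kube-root-ca.crt ConfigMap that projected token volumes need."""
import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def wait_for_root_ca(namespace: str = "kube-system", timeout: float = 30.0) -> bool:
    config.load_kube_config()        # use load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            v1.read_namespaced_config_map("kube-root-ca.crt", namespace)
            return True              # kube-api-access volumes in this namespace can now mount
        except ApiException as err:
            if err.status != 404:
                raise                # only "not found" is expected while waiting
            time.sleep(0.5)          # mirrors the 500ms durationBeforeRetry in the log
    return False

if __name__ == "__main__":
    print("kube-root-ca.crt present:", wait_for_root_ca())
```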
Jan 13 20:37:46.418646 containerd[1490]: time="2025-01-13T20:37:46.418587235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crrt6,Uid:63907c0d-d564-4334-ab46-2416cbdd2615,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbb2c3a08ef9d3b815bcfcae9c20503b5023eb5ee32f5029fe8eb315c5437fbe\"" Jan 13 20:37:46.419700 kubelet[2725]: E0113 20:37:46.419678 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:46.421212 containerd[1490]: time="2025-01-13T20:37:46.421155712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cbbj,Uid:ad36e6ba-c8b3-45da-a1e8-258425c0c1c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\"" Jan 13 20:37:46.421948 kubelet[2725]: E0113 20:37:46.421921 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:46.423105 containerd[1490]: time="2025-01-13T20:37:46.423073186Z" level=info msg="CreateContainer within sandbox \"cbb2c3a08ef9d3b815bcfcae9c20503b5023eb5ee32f5029fe8eb315c5437fbe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:37:46.423669 containerd[1490]: time="2025-01-13T20:37:46.423640522Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:37:46.441484 kubelet[2725]: I0113 20:37:46.440243 2725 topology_manager.go:215] "Topology Admit Handler" podUID="6e955397-8494-4322-a39c-650b18477d62" podNamespace="kube-system" podName="cilium-operator-5cc964979-wbwxr" Jan 13 20:37:46.448878 systemd[1]: Created slice kubepods-besteffort-pod6e955397_8494_4322_a39c_650b18477d62.slice - libcontainer container kubepods-besteffort-pod6e955397_8494_4322_a39c_650b18477d62.slice. Jan 13 20:37:46.460747 containerd[1490]: time="2025-01-13T20:37:46.460529215Z" level=info msg="CreateContainer within sandbox \"cbb2c3a08ef9d3b815bcfcae9c20503b5023eb5ee32f5029fe8eb315c5437fbe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48eaaf07281f9356228cf486476e4399b26d1f54a17017cb4fbe23ea43d3b50a\"" Jan 13 20:37:46.465003 containerd[1490]: time="2025-01-13T20:37:46.463026959Z" level=info msg="StartContainer for \"48eaaf07281f9356228cf486476e4399b26d1f54a17017cb4fbe23ea43d3b50a\"" Jan 13 20:37:46.496973 systemd[1]: Started cri-containerd-48eaaf07281f9356228cf486476e4399b26d1f54a17017cb4fbe23ea43d3b50a.scope - libcontainer container 48eaaf07281f9356228cf486476e4399b26d1f54a17017cb4fbe23ea43d3b50a. 
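The Cilium image is requested by tag plus digest, so containerd resolves the content-addressed digest rather than trusting whatever the v1.12.5 tag currently points at. A sketch of the equivalent manual pull, assuming crictl on the node is configured for this containerd socket (the image reference is copied from the log entry above):

```python
#!/usr/bin/env python3
"""Pull and inspect the digest-pinned Cilium image the kubelet requested.

Sketch only: it assumes crictl is installed and pointed at the node's
containerd; the image reference is taken verbatim from the log.
"""
import json
import subprocess

IMAGE = ("quay.io/cilium/cilium:v1.12.5"
         "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")

def pull_and_inspect(image: str = IMAGE) -> dict:
    subprocess.run(["crictl", "pull", image], check=True)       # same pull the CRI performs
    out = subprocess.run(["crictl", "inspecti", "--output", "json", image],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

if __name__ == "__main__":
    info = pull_and_inspect()
    # repoDigests is expected to contain the digest quoted in the log above.
    print(info["status"]["repoDigests"])
```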
Jan 13 20:37:46.536157 containerd[1490]: time="2025-01-13T20:37:46.536100673Z" level=info msg="StartContainer for \"48eaaf07281f9356228cf486476e4399b26d1f54a17017cb4fbe23ea43d3b50a\" returns successfully" Jan 13 20:37:46.554757 kubelet[2725]: I0113 20:37:46.554587 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e955397-8494-4322-a39c-650b18477d62-cilium-config-path\") pod \"cilium-operator-5cc964979-wbwxr\" (UID: \"6e955397-8494-4322-a39c-650b18477d62\") " pod="kube-system/cilium-operator-5cc964979-wbwxr" Jan 13 20:37:46.554757 kubelet[2725]: I0113 20:37:46.554659 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24pxl\" (UniqueName: \"kubernetes.io/projected/6e955397-8494-4322-a39c-650b18477d62-kube-api-access-24pxl\") pod \"cilium-operator-5cc964979-wbwxr\" (UID: \"6e955397-8494-4322-a39c-650b18477d62\") " pod="kube-system/cilium-operator-5cc964979-wbwxr" Jan 13 20:37:46.754986 kubelet[2725]: E0113 20:37:46.754472 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:46.755248 containerd[1490]: time="2025-01-13T20:37:46.755146617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-wbwxr,Uid:6e955397-8494-4322-a39c-650b18477d62,Namespace:kube-system,Attempt:0,}" Jan 13 20:37:46.801737 containerd[1490]: time="2025-01-13T20:37:46.801408144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:37:46.801737 containerd[1490]: time="2025-01-13T20:37:46.801529331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:37:46.801737 containerd[1490]: time="2025-01-13T20:37:46.801547576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:46.801980 containerd[1490]: time="2025-01-13T20:37:46.801663734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:46.816001 kubelet[2725]: E0113 20:37:46.815966 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:46.832102 systemd[1]: Started cri-containerd-4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2.scope - libcontainer container 4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2. Jan 13 20:37:46.878202 containerd[1490]: time="2025-01-13T20:37:46.878095659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-wbwxr,Uid:6e955397-8494-4322-a39c-650b18477d62,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\"" Jan 13 20:37:46.879016 kubelet[2725]: E0113 20:37:46.878993 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:53.381830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124074166.mount: Deactivated successfully. 
Jan 13 20:37:57.570329 containerd[1490]: time="2025-01-13T20:37:57.570269584Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:57.579323 containerd[1490]: time="2025-01-13T20:37:57.579271272Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735283" Jan 13 20:37:57.598433 containerd[1490]: time="2025-01-13T20:37:57.598400102Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:57.599770 containerd[1490]: time="2025-01-13T20:37:57.599742372Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.17606409s" Jan 13 20:37:57.599770 containerd[1490]: time="2025-01-13T20:37:57.599770615Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:37:57.600379 containerd[1490]: time="2025-01-13T20:37:57.600355143Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:37:57.602982 containerd[1490]: time="2025-01-13T20:37:57.602752442Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:37:57.754807 containerd[1490]: time="2025-01-13T20:37:57.754724726Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\"" Jan 13 20:37:57.755523 containerd[1490]: time="2025-01-13T20:37:57.755488190Z" level=info msg="StartContainer for \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\"" Jan 13 20:37:57.789028 systemd[1]: Started cri-containerd-24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1.scope - libcontainer container 24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1. Jan 13 20:37:57.829690 systemd[1]: cri-containerd-24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1.scope: Deactivated successfully. 
Jan 13 20:37:57.831319 containerd[1490]: time="2025-01-13T20:37:57.831252019Z" level=info msg="StartContainer for \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\" returns successfully" Jan 13 20:37:57.841200 kubelet[2725]: E0113 20:37:57.841169 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:57.918789 kubelet[2725]: I0113 20:37:57.918751 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-crrt6" podStartSLOduration=12.918684353 podStartE2EDuration="12.918684353s" podCreationTimestamp="2025-01-13 20:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:37:46.829413052 +0000 UTC m=+16.165785529" watchObservedRunningTime="2025-01-13 20:37:57.918684353 +0000 UTC m=+27.255056840" Jan 13 20:37:58.715964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1-rootfs.mount: Deactivated successfully. Jan 13 20:37:58.842709 kubelet[2725]: E0113 20:37:58.842661 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:59.083886 containerd[1490]: time="2025-01-13T20:37:59.083800667Z" level=info msg="shim disconnected" id=24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1 namespace=k8s.io Jan 13 20:37:59.083886 containerd[1490]: time="2025-01-13T20:37:59.083880077Z" level=warning msg="cleaning up after shim disconnected" id=24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1 namespace=k8s.io Jan 13 20:37:59.083886 containerd[1490]: time="2025-01-13T20:37:59.083891679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:37:59.542100 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:48950.service - OpenSSH per-connection server daemon (10.0.0.1:48950). Jan 13 20:37:59.593960 sshd[3183]: Accepted publickey for core from 10.0.0.1 port 48950 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:37:59.596183 sshd-session[3183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:59.601124 systemd-logind[1479]: New session 10 of user core. Jan 13 20:37:59.609964 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:37:59.735225 sshd[3185]: Connection closed by 10.0.0.1 port 48950 Jan 13 20:37:59.735618 sshd-session[3183]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:59.740147 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:48950.service: Deactivated successfully. Jan 13 20:37:59.742229 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:37:59.743279 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:37:59.744270 systemd-logind[1479]: Removed session 10. 
Jan 13 20:37:59.845572 kubelet[2725]: E0113 20:37:59.845332 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:37:59.851382 containerd[1490]: time="2025-01-13T20:37:59.851339721Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:37:59.873309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881925516.mount: Deactivated successfully. Jan 13 20:37:59.874425 containerd[1490]: time="2025-01-13T20:37:59.874374167Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\"" Jan 13 20:37:59.875092 containerd[1490]: time="2025-01-13T20:37:59.874871180Z" level=info msg="StartContainer for \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\"" Jan 13 20:37:59.907953 systemd[1]: Started cri-containerd-e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320.scope - libcontainer container e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320. Jan 13 20:37:59.937631 containerd[1490]: time="2025-01-13T20:37:59.937579978Z" level=info msg="StartContainer for \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\" returns successfully" Jan 13 20:37:59.948942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:37:59.949236 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:37:59.949336 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:37:59.957234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:37:59.957494 systemd[1]: cri-containerd-e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320.scope: Deactivated successfully. Jan 13 20:37:59.975037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
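The apply-sysctl-overwrites init step runs bracketed by systemd stopping and re-running systemd-sysctl.service, so the host's own "Apply Kernel Variables" pass is redone after the container has adjusted kernel parameters (the exact trigger for that restart is not visible in this log). A quick way to confirm the pass completed, assuming systemctl is available on the node:

```python
#!/usr/bin/env python3
"""Check that systemd-sysctl.service finished successfully after the overwrite step."""
import subprocess

def unit_state(unit: str = "systemd-sysctl.service") -> dict:
    out = subprocess.run(
        ["systemctl", "show", unit, "--property=ActiveState,SubState,Result"],
        check=True, capture_output=True, text=True)
    return dict(line.split("=", 1) for line in out.stdout.strip().splitlines())

if __name__ == "__main__":
    # Expect Result=success once "Finished systemd-sysctl.service" appears in the journal.
    print(unit_state())
```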
Jan 13 20:37:59.990382 containerd[1490]: time="2025-01-13T20:37:59.990332997Z" level=info msg="shim disconnected" id=e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320 namespace=k8s.io Jan 13 20:37:59.990382 containerd[1490]: time="2025-01-13T20:37:59.990380065Z" level=warning msg="cleaning up after shim disconnected" id=e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320 namespace=k8s.io Jan 13 20:37:59.990580 containerd[1490]: time="2025-01-13T20:37:59.990390295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:38:00.847777 kubelet[2725]: E0113 20:38:00.847747 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:00.849286 containerd[1490]: time="2025-01-13T20:38:00.849237087Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:38:00.871314 containerd[1490]: time="2025-01-13T20:38:00.869806996Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\"" Jan 13 20:38:00.871314 containerd[1490]: time="2025-01-13T20:38:00.870912662Z" level=info msg="StartContainer for \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\"" Jan 13 20:38:00.871267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320-rootfs.mount: Deactivated successfully. Jan 13 20:38:00.905965 systemd[1]: Started cri-containerd-750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f.scope - libcontainer container 750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f. Jan 13 20:38:00.934689 containerd[1490]: time="2025-01-13T20:38:00.934630206Z" level=info msg="StartContainer for \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\" returns successfully" Jan 13 20:38:00.935411 systemd[1]: cri-containerd-750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f.scope: Deactivated successfully. Jan 13 20:38:00.953989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f-rootfs.mount: Deactivated successfully. 
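The mount-bpf-fs init container exists to make sure a BPF filesystem is mounted (conventionally at /sys/fs/bpf) so the agent can pin its eBPF maps; the container's exact mount target is not shown in the log, so treat the path as an assumption. A small check of the result on the host:

```python
#!/usr/bin/env python3
"""Confirm that a bpf filesystem is mounted, as the mount-bpf-fs step intends."""

def bpf_mounts(proc_mounts: str = "/proc/mounts") -> list[str]:
    targets = []
    with open(proc_mounts) as f:
        for line in f:
            device, target, fstype, *_ = line.split()
            if fstype == "bpf":
                targets.append(target)
    return targets

if __name__ == "__main__":
    found = bpf_mounts()
    print(found if found else "no bpf filesystem mounted")   # typically ['/sys/fs/bpf']
```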
Jan 13 20:38:00.965107 containerd[1490]: time="2025-01-13T20:38:00.965044852Z" level=info msg="shim disconnected" id=750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f namespace=k8s.io Jan 13 20:38:00.965107 containerd[1490]: time="2025-01-13T20:38:00.965107429Z" level=warning msg="cleaning up after shim disconnected" id=750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f namespace=k8s.io Jan 13 20:38:00.965285 containerd[1490]: time="2025-01-13T20:38:00.965123109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:38:01.850759 kubelet[2725]: E0113 20:38:01.850730 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:01.852578 containerd[1490]: time="2025-01-13T20:38:01.852427950Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:38:01.872185 containerd[1490]: time="2025-01-13T20:38:01.872143062Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\"" Jan 13 20:38:01.872678 containerd[1490]: time="2025-01-13T20:38:01.872622131Z" level=info msg="StartContainer for \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\"" Jan 13 20:38:01.910025 systemd[1]: Started cri-containerd-c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931.scope - libcontainer container c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931. Jan 13 20:38:01.932270 systemd[1]: cri-containerd-c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931.scope: Deactivated successfully. Jan 13 20:38:01.934740 containerd[1490]: time="2025-01-13T20:38:01.934701715Z" level=info msg="StartContainer for \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\" returns successfully" Jan 13 20:38:01.958632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931-rootfs.mount: Deactivated successfully. 
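clean-cilium-state is the last of the short-lived init steps before the agent container itself, and each of them follows the same pattern in this log: CreateContainer, StartContainer, the scope deactivates when the process exits, then the shim and rootfs mount are cleaned up. A sketch that summarizes how far the chain has progressed, assuming the kubernetes Python client and a kubeconfig for this cluster:

```python
#!/usr/bin/env python3
"""Summarize the cilium-9cbbj init container chain seen in the log."""
from kubernetes import client, config

def init_progress(pod: str = "cilium-9cbbj", namespace: str = "kube-system") -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    status = v1.read_namespaced_pod(pod, namespace).status
    for cs in (status.init_container_statuses or []):
        if cs.state.terminated is not None:
            print(f"{cs.name}: exited {cs.state.terminated.exit_code}")
        elif cs.state.running is not None:
            print(f"{cs.name}: running")
        else:
            print(f"{cs.name}: waiting")

if __name__ == "__main__":
    # Expect the names from the log: mount-cgroup, apply-sysctl-overwrites,
    # mount-bpf-fs, clean-cilium-state.
    init_progress()
```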
Jan 13 20:38:01.965171 containerd[1490]: time="2025-01-13T20:38:01.965109876Z" level=info msg="shim disconnected" id=c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931 namespace=k8s.io Jan 13 20:38:01.965171 containerd[1490]: time="2025-01-13T20:38:01.965157696Z" level=warning msg="cleaning up after shim disconnected" id=c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931 namespace=k8s.io Jan 13 20:38:01.965171 containerd[1490]: time="2025-01-13T20:38:01.965165661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:38:02.854842 kubelet[2725]: E0113 20:38:02.854790 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:02.857449 containerd[1490]: time="2025-01-13T20:38:02.857396544Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:38:02.876020 containerd[1490]: time="2025-01-13T20:38:02.875970242Z" level=info msg="CreateContainer within sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\"" Jan 13 20:38:02.876486 containerd[1490]: time="2025-01-13T20:38:02.876454341Z" level=info msg="StartContainer for \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\"" Jan 13 20:38:02.904944 systemd[1]: Started cri-containerd-409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0.scope - libcontainer container 409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0. Jan 13 20:38:02.940157 containerd[1490]: time="2025-01-13T20:38:02.940110146Z" level=info msg="StartContainer for \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\" returns successfully" Jan 13 20:38:03.087148 kubelet[2725]: I0113 20:38:03.086537 2725 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:38:03.106707 kubelet[2725]: I0113 20:38:03.106150 2725 topology_manager.go:215] "Topology Admit Handler" podUID="9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d" podNamespace="kube-system" podName="coredns-76f75df574-l9xbb" Jan 13 20:38:03.107813 kubelet[2725]: I0113 20:38:03.107798 2725 topology_manager.go:215] "Topology Admit Handler" podUID="b96ab59c-8db4-4097-bc26-8b1c7af5e6fb" podNamespace="kube-system" podName="coredns-76f75df574-vp9xh" Jan 13 20:38:03.111981 kubelet[2725]: W0113 20:38:03.111698 2725 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 13 20:38:03.111981 kubelet[2725]: E0113 20:38:03.111726 2725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 13 20:38:03.115115 systemd[1]: Created slice kubepods-burstable-pod9b9dbf12_9e6b_4ac6_a266_2b70701bbf6d.slice - libcontainer container kubepods-burstable-pod9b9dbf12_9e6b_4ac6_a266_2b70701bbf6d.slice. 
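With cilium-agent running, the kubelet reports the node as Ready and the two CoreDNS pods are admitted; the reflector warnings are the node authorizer refusing the kubelet's watch on the coredns ConfigMap until the binding of those pods to this node is reflected in its authorization graph, which resolves once the bindings propagate. A sketch that reads the node's conditions, assuming the kubernetes Python client and a kubeconfig (the node name localhost is taken from the log):

```python
#!/usr/bin/env python3
"""Print the node's conditions, which flip to Ready once the CNI is up."""
from kubernetes import client, config

def node_conditions(name: str = "localhost") -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    node = v1.read_node(name)
    for cond in node.status.conditions:
        print(f"{cond.type}: {cond.status} ({cond.reason})")

if __name__ == "__main__":
    node_conditions()   # expect Ready=True shortly after cilium-agent starts
```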
Jan 13 20:38:03.121345 systemd[1]: Created slice kubepods-burstable-podb96ab59c_8db4_4097_bc26_8b1c7af5e6fb.slice - libcontainer container kubepods-burstable-podb96ab59c_8db4_4097_bc26_8b1c7af5e6fb.slice. Jan 13 20:38:03.264813 kubelet[2725]: I0113 20:38:03.264770 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b96ab59c-8db4-4097-bc26-8b1c7af5e6fb-config-volume\") pod \"coredns-76f75df574-vp9xh\" (UID: \"b96ab59c-8db4-4097-bc26-8b1c7af5e6fb\") " pod="kube-system/coredns-76f75df574-vp9xh" Jan 13 20:38:03.264962 kubelet[2725]: I0113 20:38:03.264842 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjht\" (UniqueName: \"kubernetes.io/projected/9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d-kube-api-access-tdjht\") pod \"coredns-76f75df574-l9xbb\" (UID: \"9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d\") " pod="kube-system/coredns-76f75df574-l9xbb" Jan 13 20:38:03.264962 kubelet[2725]: I0113 20:38:03.264924 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d-config-volume\") pod \"coredns-76f75df574-l9xbb\" (UID: \"9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d\") " pod="kube-system/coredns-76f75df574-l9xbb" Jan 13 20:38:03.265030 kubelet[2725]: I0113 20:38:03.264961 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfg9\" (UniqueName: \"kubernetes.io/projected/b96ab59c-8db4-4097-bc26-8b1c7af5e6fb-kube-api-access-fnfg9\") pod \"coredns-76f75df574-vp9xh\" (UID: \"b96ab59c-8db4-4097-bc26-8b1c7af5e6fb\") " pod="kube-system/coredns-76f75df574-vp9xh" Jan 13 20:38:03.858829 kubelet[2725]: E0113 20:38:03.858787 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:03.870532 kubelet[2725]: I0113 20:38:03.870487 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9cbbj" podStartSLOduration=7.693241572 podStartE2EDuration="18.87044419s" podCreationTimestamp="2025-01-13 20:37:45 +0000 UTC" firstStartedPulling="2025-01-13 20:37:46.422954392 +0000 UTC m=+15.759326879" lastFinishedPulling="2025-01-13 20:37:57.60015701 +0000 UTC m=+26.936529497" observedRunningTime="2025-01-13 20:38:03.869987844 +0000 UTC m=+33.206360341" watchObservedRunningTime="2025-01-13 20:38:03.87044419 +0000 UTC m=+33.206816677" Jan 13 20:38:03.874984 systemd[1]: run-containerd-runc-k8s.io-409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0-runc.iHqTUD.mount: Deactivated successfully. 
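The cilium-9cbbj startup record makes the two duration fields concrete: podStartE2EDuration (18.87s) runs from pod creation to observed running, while podStartSLOduration excludes the image pull window, and the pull timestamps in the same entry account for the difference. A quick reproduction of the arithmetic:

```python
#!/usr/bin/env python3
"""Reproduce podStartSLOduration from the timestamps in the log entry above."""
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps from the log, truncated to microsecond precision.
first_pull = datetime.strptime("2025-01-13 20:37:46.422954", FMT)
last_pull  = datetime.strptime("2025-01-13 20:37:57.600157", FMT)
e2e_seconds = 18.870444190        # podStartE2EDuration

pull_seconds = (last_pull - first_pull).total_seconds()
print(f"image pull window: {pull_seconds:.3f}s")                 # ~11.177s
print(f"E2E minus pull:    {e2e_seconds - pull_seconds:.3f}s")   # ~7.693s, the logged podStartSLOduration
```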
Jan 13 20:38:04.318667 kubelet[2725]: E0113 20:38:04.318631 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:04.319341 containerd[1490]: time="2025-01-13T20:38:04.319294474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l9xbb,Uid:9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:04.323830 kubelet[2725]: E0113 20:38:04.323795 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:04.324233 containerd[1490]: time="2025-01-13T20:38:04.324201833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vp9xh,Uid:b96ab59c-8db4-4097-bc26-8b1c7af5e6fb,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:04.748007 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:40378.service - OpenSSH per-connection server daemon (10.0.0.1:40378). Jan 13 20:38:04.794838 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 40378 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:04.796595 sshd-session[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:04.800623 systemd-logind[1479]: New session 11 of user core. Jan 13 20:38:04.810950 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:38:04.861237 kubelet[2725]: E0113 20:38:04.860609 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:04.928568 sshd[3523]: Connection closed by 10.0.0.1 port 40378 Jan 13 20:38:04.928914 sshd-session[3521]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:04.933343 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:40378.service: Deactivated successfully. Jan 13 20:38:04.935639 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:38:04.936355 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:38:04.937258 systemd-logind[1479]: Removed session 11. 
Jan 13 20:38:09.503014 containerd[1490]: time="2025-01-13T20:38:09.502955179Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:09.503778 containerd[1490]: time="2025-01-13T20:38:09.503745341Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906617" Jan 13 20:38:09.504969 containerd[1490]: time="2025-01-13T20:38:09.504940794Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:09.506294 containerd[1490]: time="2025-01-13T20:38:09.506247135Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 11.905863188s" Jan 13 20:38:09.506294 containerd[1490]: time="2025-01-13T20:38:09.506292290Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:38:09.508059 containerd[1490]: time="2025-01-13T20:38:09.507955311Z" level=info msg="CreateContainer within sandbox \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:38:09.522701 containerd[1490]: time="2025-01-13T20:38:09.522655131Z" level=info msg="CreateContainer within sandbox \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\"" Jan 13 20:38:09.525588 containerd[1490]: time="2025-01-13T20:38:09.525526398Z" level=info msg="StartContainer for \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\"" Jan 13 20:38:09.559969 systemd[1]: Started cri-containerd-13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4.scope - libcontainer container 13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4. Jan 13 20:38:09.703494 containerd[1490]: time="2025-01-13T20:38:09.703433515Z" level=info msg="StartContainer for \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\" returns successfully" Jan 13 20:38:09.869219 kubelet[2725]: E0113 20:38:09.869189 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:09.943993 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:40380.service - OpenSSH per-connection server daemon (10.0.0.1:40380). Jan 13 20:38:10.027666 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 40380 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:10.029897 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:10.035622 systemd-logind[1479]: New session 12 of user core. 
Jan 13 20:38:10.050063 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:38:10.188457 sshd[3587]: Connection closed by 10.0.0.1 port 40380 Jan 13 20:38:10.189024 sshd-session[3585]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:10.192865 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:40380.service: Deactivated successfully. Jan 13 20:38:10.195008 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:38:10.196033 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:38:10.197264 systemd-logind[1479]: Removed session 12. Jan 13 20:38:10.870587 kubelet[2725]: E0113 20:38:10.870550 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:13.140752 systemd-networkd[1412]: cilium_host: Link UP Jan 13 20:38:13.140946 systemd-networkd[1412]: cilium_net: Link UP Jan 13 20:38:13.141116 systemd-networkd[1412]: cilium_net: Gained carrier Jan 13 20:38:13.141298 systemd-networkd[1412]: cilium_host: Gained carrier Jan 13 20:38:13.259632 systemd-networkd[1412]: cilium_vxlan: Link UP Jan 13 20:38:13.259644 systemd-networkd[1412]: cilium_vxlan: Gained carrier Jan 13 20:38:13.458851 kernel: NET: Registered PF_ALG protocol family Jan 13 20:38:13.838027 systemd-networkd[1412]: cilium_host: Gained IPv6LL Jan 13 20:38:14.094034 systemd-networkd[1412]: cilium_net: Gained IPv6LL Jan 13 20:38:14.227690 systemd-networkd[1412]: lxc_health: Link UP Jan 13 20:38:14.236264 systemd-networkd[1412]: lxc_health: Gained carrier Jan 13 20:38:14.319651 kubelet[2725]: E0113 20:38:14.319590 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:14.345568 kubelet[2725]: I0113 20:38:14.343717 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-wbwxr" podStartSLOduration=5.7166339189999995 podStartE2EDuration="28.343676711s" podCreationTimestamp="2025-01-13 20:37:46 +0000 UTC" firstStartedPulling="2025-01-13 20:37:46.879471815 +0000 UTC m=+16.215844302" lastFinishedPulling="2025-01-13 20:38:09.506514607 +0000 UTC m=+38.842887094" observedRunningTime="2025-01-13 20:38:09.876619593 +0000 UTC m=+39.212992080" watchObservedRunningTime="2025-01-13 20:38:14.343676711 +0000 UTC m=+43.680049208" Jan 13 20:38:14.399181 systemd-networkd[1412]: lxc0c699bf58ca6: Link UP Jan 13 20:38:14.408926 kernel: eth0: renamed from tmpadc5e Jan 13 20:38:14.442553 kernel: eth0: renamed from tmp89435 Jan 13 20:38:14.449539 systemd-networkd[1412]: lxc0c699bf58ca6: Gained carrier Jan 13 20:38:14.450054 systemd-networkd[1412]: lxc52a3e699e210: Link UP Jan 13 20:38:14.451073 systemd-networkd[1412]: lxc52a3e699e210: Gained carrier Jan 13 20:38:14.476986 systemd-networkd[1412]: cilium_vxlan: Gained IPv6LL Jan 13 20:38:14.878043 kubelet[2725]: E0113 20:38:14.878009 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:15.212771 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:44956.service - OpenSSH per-connection server daemon (10.0.0.1:44956). 
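The systemd-networkd burst is Cilium building its datapath: cilium_host and cilium_net are the agent's host-side veth pair, cilium_vxlan carries the overlay traffic, lxc_health backs the connectivity-check endpoint, and each lxc* link is the host end of a pod's veth (the kernel's eth0 rename messages are the container ends being moved into pod namespaces). A small sketch that lists those links, assuming iproute2's ip command on the node:

```python
#!/usr/bin/env python3
"""List the Cilium-related links that systemd-networkd reported above."""
import subprocess

PREFIXES = ("cilium_", "lxc")

def cilium_links() -> list[str]:
    out = subprocess.run(["ip", "-o", "link", "show"],
                         check=True, capture_output=True, text=True)
    names = []
    for line in out.stdout.splitlines():
        # each line looks like "<idx>: <name>[@peer]: <flags> ..."
        name = line.split(":", 2)[1].strip().split("@")[0]
        names.append(name)
    return [n for n in names if n.startswith(PREFIXES)]

if __name__ == "__main__":
    for name in cilium_links():
        print(name)   # expect cilium_host, cilium_net, cilium_vxlan, lxc_health, lxc...
```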
Jan 13 20:38:15.264106 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 44956 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:15.266510 sshd-session[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:15.272282 systemd-logind[1479]: New session 13 of user core. Jan 13 20:38:15.281059 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:38:15.422237 sshd[3975]: Connection closed by 10.0.0.1 port 44956 Jan 13 20:38:15.422622 sshd-session[3971]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:15.427071 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:44956.service: Deactivated successfully. Jan 13 20:38:15.429957 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:38:15.433571 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:38:15.436237 systemd-logind[1479]: Removed session 13. Jan 13 20:38:15.501957 systemd-networkd[1412]: lxc_health: Gained IPv6LL Jan 13 20:38:15.565049 systemd-networkd[1412]: lxc0c699bf58ca6: Gained IPv6LL Jan 13 20:38:16.140984 systemd-networkd[1412]: lxc52a3e699e210: Gained IPv6LL Jan 13 20:38:17.976774 containerd[1490]: time="2025-01-13T20:38:17.976644950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:17.976774 containerd[1490]: time="2025-01-13T20:38:17.976714964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:17.976774 containerd[1490]: time="2025-01-13T20:38:17.976727268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.977289 containerd[1490]: time="2025-01-13T20:38:17.976724282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:17.977289 containerd[1490]: time="2025-01-13T20:38:17.976795278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:17.977289 containerd[1490]: time="2025-01-13T20:38:17.976809757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.977289 containerd[1490]: time="2025-01-13T20:38:17.976861586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.977289 containerd[1490]: time="2025-01-13T20:38:17.977051893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:18.002984 systemd[1]: Started cri-containerd-8943531edb3e217632c793a6637ba72e9e60141aec1decdec24044f450276530.scope - libcontainer container 8943531edb3e217632c793a6637ba72e9e60141aec1decdec24044f450276530. Jan 13 20:38:18.010645 systemd[1]: Started cri-containerd-adc5e9e03c1c35e093b4eaea58abd40c5c09c6689242b2ff0c3e41baddf3ef53.scope - libcontainer container adc5e9e03c1c35e093b4eaea58abd40c5c09c6689242b2ff0c3e41baddf3ef53. 
Jan 13 20:38:18.017592 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:38:18.024735 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:38:18.044494 containerd[1490]: time="2025-01-13T20:38:18.044433201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vp9xh,Uid:b96ab59c-8db4-4097-bc26-8b1c7af5e6fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8943531edb3e217632c793a6637ba72e9e60141aec1decdec24044f450276530\"" Jan 13 20:38:18.048279 kubelet[2725]: E0113 20:38:18.048248 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:18.053341 containerd[1490]: time="2025-01-13T20:38:18.053291513Z" level=info msg="CreateContainer within sandbox \"8943531edb3e217632c793a6637ba72e9e60141aec1decdec24044f450276530\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:38:18.066474 containerd[1490]: time="2025-01-13T20:38:18.066424838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l9xbb,Uid:9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"adc5e9e03c1c35e093b4eaea58abd40c5c09c6689242b2ff0c3e41baddf3ef53\"" Jan 13 20:38:18.067146 kubelet[2725]: E0113 20:38:18.067120 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:18.068959 containerd[1490]: time="2025-01-13T20:38:18.068909235Z" level=info msg="CreateContainer within sandbox \"adc5e9e03c1c35e093b4eaea58abd40c5c09c6689242b2ff0c3e41baddf3ef53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:38:18.083105 containerd[1490]: time="2025-01-13T20:38:18.083057270Z" level=info msg="CreateContainer within sandbox \"8943531edb3e217632c793a6637ba72e9e60141aec1decdec24044f450276530\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"870136a65e1dc202f0c29af20c1873aed45655cfa3c2fd39037d1af9ff442557\"" Jan 13 20:38:18.083597 containerd[1490]: time="2025-01-13T20:38:18.083557652Z" level=info msg="StartContainer for \"870136a65e1dc202f0c29af20c1873aed45655cfa3c2fd39037d1af9ff442557\"" Jan 13 20:38:18.086887 containerd[1490]: time="2025-01-13T20:38:18.086849121Z" level=info msg="CreateContainer within sandbox \"adc5e9e03c1c35e093b4eaea58abd40c5c09c6689242b2ff0c3e41baddf3ef53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54e76e18161d655a2aa63a166d5fda316416133259cf04309053bce7a6a27ca5\"" Jan 13 20:38:18.087854 containerd[1490]: time="2025-01-13T20:38:18.087751316Z" level=info msg="StartContainer for \"54e76e18161d655a2aa63a166d5fda316416133259cf04309053bce7a6a27ca5\"" Jan 13 20:38:18.114020 systemd[1]: Started cri-containerd-870136a65e1dc202f0c29af20c1873aed45655cfa3c2fd39037d1af9ff442557.scope - libcontainer container 870136a65e1dc202f0c29af20c1873aed45655cfa3c2fd39037d1af9ff442557. Jan 13 20:38:18.117318 systemd[1]: Started cri-containerd-54e76e18161d655a2aa63a166d5fda316416133259cf04309053bce7a6a27ca5.scope - libcontainer container 54e76e18161d655a2aa63a166d5fda316416133259cf04309053bce7a6a27ca5. 
Jan 13 20:38:18.151298 containerd[1490]: time="2025-01-13T20:38:18.151247823Z" level=info msg="StartContainer for \"870136a65e1dc202f0c29af20c1873aed45655cfa3c2fd39037d1af9ff442557\" returns successfully" Jan 13 20:38:18.151298 containerd[1490]: time="2025-01-13T20:38:18.151288671Z" level=info msg="StartContainer for \"54e76e18161d655a2aa63a166d5fda316416133259cf04309053bce7a6a27ca5\" returns successfully" Jan 13 20:38:18.887866 kubelet[2725]: E0113 20:38:18.886885 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:18.890304 kubelet[2725]: E0113 20:38:18.890264 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:18.901463 kubelet[2725]: I0113 20:38:18.901206 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l9xbb" podStartSLOduration=32.9011522 podStartE2EDuration="32.9011522s" podCreationTimestamp="2025-01-13 20:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:38:18.899713204 +0000 UTC m=+48.236085701" watchObservedRunningTime="2025-01-13 20:38:18.9011522 +0000 UTC m=+48.237524697" Jan 13 20:38:18.927608 kubelet[2725]: I0113 20:38:18.926456 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vp9xh" podStartSLOduration=32.926404849 podStartE2EDuration="32.926404849s" podCreationTimestamp="2025-01-13 20:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:38:18.925934393 +0000 UTC m=+48.262306880" watchObservedRunningTime="2025-01-13 20:38:18.926404849 +0000 UTC m=+48.262777346" Jan 13 20:38:19.892276 kubelet[2725]: E0113 20:38:19.892190 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:19.892735 kubelet[2725]: E0113 20:38:19.892348 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:20.439383 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:44962.service - OpenSSH per-connection server daemon (10.0.0.1:44962). Jan 13 20:38:20.494048 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 44962 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:20.496184 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:20.501233 systemd-logind[1479]: New session 14 of user core. Jan 13 20:38:20.511176 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:38:20.626226 sshd[4169]: Connection closed by 10.0.0.1 port 44962 Jan 13 20:38:20.626578 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:20.630587 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:44962.service: Deactivated successfully. Jan 13 20:38:20.632588 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:38:20.633450 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit. 
Jan 13 20:38:20.634313 systemd-logind[1479]: Removed session 14. Jan 13 20:38:20.893575 kubelet[2725]: E0113 20:38:20.893532 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:20.894041 kubelet[2725]: E0113 20:38:20.893702 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:25.656911 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:58696.service - OpenSSH per-connection server daemon (10.0.0.1:58696). Jan 13 20:38:25.776532 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 58696 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:25.782158 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:25.800992 systemd-logind[1479]: New session 15 of user core. Jan 13 20:38:25.814658 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:38:26.057969 sshd[4189]: Connection closed by 10.0.0.1 port 58696 Jan 13 20:38:26.060124 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:26.066067 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:58696.service: Deactivated successfully. Jan 13 20:38:26.071880 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:38:26.080772 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:38:26.100209 systemd-logind[1479]: Removed session 15. Jan 13 20:38:31.090720 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:58698.service - OpenSSH per-connection server daemon (10.0.0.1:58698). Jan 13 20:38:31.178841 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 58698 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:31.179722 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:31.189033 systemd-logind[1479]: New session 16 of user core. Jan 13 20:38:31.204152 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:38:31.433232 sshd[4206]: Connection closed by 10.0.0.1 port 58698 Jan 13 20:38:31.433054 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:31.449039 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:58698.service: Deactivated successfully. Jan 13 20:38:31.453744 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:38:31.457893 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:38:31.482454 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:58704.service - OpenSSH per-connection server daemon (10.0.0.1:58704). Jan 13 20:38:31.487152 systemd-logind[1479]: Removed session 16. Jan 13 20:38:31.541142 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 58704 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:31.543686 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:31.571139 systemd-logind[1479]: New session 17 of user core. Jan 13 20:38:31.586229 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 13 20:38:31.898275 sshd[4222]: Connection closed by 10.0.0.1 port 58704 Jan 13 20:38:31.901886 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:31.931069 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:58704.service: Deactivated successfully. Jan 13 20:38:31.936161 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:38:31.942884 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:38:31.965076 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:58714.service - OpenSSH per-connection server daemon (10.0.0.1:58714). Jan 13 20:38:31.967645 systemd-logind[1479]: Removed session 17. Jan 13 20:38:32.035863 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 58714 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:32.039027 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:32.066678 systemd-logind[1479]: New session 18 of user core. Jan 13 20:38:32.078164 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:38:32.306122 sshd[4234]: Connection closed by 10.0.0.1 port 58714 Jan 13 20:38:32.306575 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:32.320259 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:58714.service: Deactivated successfully. Jan 13 20:38:32.323041 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:38:32.330055 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:38:32.335331 systemd-logind[1479]: Removed session 18. Jan 13 20:38:37.400625 kernel: hrtimer: interrupt took 10157613 ns Jan 13 20:38:37.397463 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:52520.service - OpenSSH per-connection server daemon (10.0.0.1:52520). Jan 13 20:38:37.483363 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 52520 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:37.484212 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:37.500343 systemd-logind[1479]: New session 19 of user core. Jan 13 20:38:37.519165 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:38:37.737903 sshd[4248]: Connection closed by 10.0.0.1 port 52520 Jan 13 20:38:37.736431 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:37.751405 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:52520.service: Deactivated successfully. Jan 13 20:38:37.770835 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:38:37.775156 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:38:37.778961 systemd-logind[1479]: Removed session 19. Jan 13 20:38:42.797936 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:52530.service - OpenSSH per-connection server daemon (10.0.0.1:52530). Jan 13 20:38:42.880477 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 52530 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:42.884264 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:42.906024 systemd-logind[1479]: New session 20 of user core. Jan 13 20:38:42.911170 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 13 20:38:43.145372 sshd[4262]: Connection closed by 10.0.0.1 port 52530 Jan 13 20:38:43.144501 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:43.155973 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:52530.service: Deactivated successfully. Jan 13 20:38:43.158944 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:38:43.169275 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:38:43.170580 systemd-logind[1479]: Removed session 20. Jan 13 20:38:46.770664 kubelet[2725]: E0113 20:38:46.770204 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:48.185224 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:54116.service - OpenSSH per-connection server daemon (10.0.0.1:54116). Jan 13 20:38:48.332557 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 54116 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:48.335420 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:48.352367 systemd-logind[1479]: New session 21 of user core. Jan 13 20:38:48.362144 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:38:48.598667 sshd[4279]: Connection closed by 10.0.0.1 port 54116 Jan 13 20:38:48.601454 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:48.610207 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:54116.service: Deactivated successfully. Jan 13 20:38:48.616261 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:38:48.618147 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:38:48.627026 systemd-logind[1479]: Removed session 21. Jan 13 20:38:53.645492 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:54132.service - OpenSSH per-connection server daemon (10.0.0.1:54132). Jan 13 20:38:53.765877 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 54132 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:53.766889 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:53.775096 systemd-logind[1479]: New session 22 of user core. Jan 13 20:38:53.796795 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:38:54.027246 sshd[4294]: Connection closed by 10.0.0.1 port 54132 Jan 13 20:38:54.028298 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:54.040477 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:54132.service: Deactivated successfully. Jan 13 20:38:54.046465 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:38:54.050085 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:38:54.060101 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:54134.service - OpenSSH per-connection server daemon (10.0.0.1:54134). Jan 13 20:38:54.062395 systemd-logind[1479]: Removed session 22. Jan 13 20:38:54.128042 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 54134 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:54.129767 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:54.173398 systemd-logind[1479]: New session 23 of user core. Jan 13 20:38:54.181195 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 13 20:38:54.778669 kubelet[2725]: E0113 20:38:54.774142 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:54.925422 sshd[4309]: Connection closed by 10.0.0.1 port 54134 Jan 13 20:38:54.924287 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:54.940042 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:54134.service: Deactivated successfully. Jan 13 20:38:54.945742 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:38:54.948427 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:38:54.977303 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:47380.service - OpenSSH per-connection server daemon (10.0.0.1:47380). Jan 13 20:38:54.996484 systemd-logind[1479]: Removed session 23. Jan 13 20:38:55.069453 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 47380 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:55.072575 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:55.097168 systemd-logind[1479]: New session 24 of user core. Jan 13 20:38:55.120801 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:38:57.647665 sshd[4321]: Connection closed by 10.0.0.1 port 47380 Jan 13 20:38:57.648971 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:57.664677 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:47380.service: Deactivated successfully. Jan 13 20:38:57.676877 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:38:57.697672 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:38:57.719931 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:47394.service - OpenSSH per-connection server daemon (10.0.0.1:47394). Jan 13 20:38:57.722169 systemd-logind[1479]: Removed session 24. Jan 13 20:38:57.798673 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 47394 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:57.798994 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:57.822178 systemd-logind[1479]: New session 25 of user core. Jan 13 20:38:57.838149 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:38:58.349939 sshd[4340]: Connection closed by 10.0.0.1 port 47394 Jan 13 20:38:58.344839 sshd-session[4338]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:58.381466 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:47394.service: Deactivated successfully. Jan 13 20:38:58.393710 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:38:58.397105 systemd-logind[1479]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:38:58.427881 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:47396.service - OpenSSH per-connection server daemon (10.0.0.1:47396). Jan 13 20:38:58.442990 systemd-logind[1479]: Removed session 25. Jan 13 20:38:58.507355 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 47396 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:38:58.510890 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:58.529893 systemd-logind[1479]: New session 26 of user core. Jan 13 20:38:58.550762 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 13 20:38:58.780141 sshd[4352]: Connection closed by 10.0.0.1 port 47396 Jan 13 20:38:58.783194 sshd-session[4350]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:58.791837 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:47396.service: Deactivated successfully. Jan 13 20:38:58.794965 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:38:58.796917 systemd-logind[1479]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:38:58.809387 systemd-logind[1479]: Removed session 26. Jan 13 20:39:03.810346 systemd[1]: Started sshd@26-10.0.0.79:22-10.0.0.1:47406.service - OpenSSH per-connection server daemon (10.0.0.1:47406). Jan 13 20:39:03.880540 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 47406 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:03.882885 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:03.905098 systemd-logind[1479]: New session 27 of user core. Jan 13 20:39:03.917222 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:39:03.959607 update_engine[1481]: I20250113 20:39:03.958206 1481 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 20:39:03.959607 update_engine[1481]: I20250113 20:39:03.958277 1481 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 20:39:03.959607 update_engine[1481]: I20250113 20:39:03.958557 1481 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 20:39:03.959607 update_engine[1481]: I20250113 20:39:03.959259 1481 omaha_request_params.cc:62] Current group set to stable Jan 13 20:39:03.985007 update_engine[1481]: I20250113 20:39:03.967084 1481 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 20:39:03.985007 update_engine[1481]: I20250113 20:39:03.973493 1481 update_attempter.cc:643] Scheduling an action processor start. Jan 13 20:39:03.985007 update_engine[1481]: I20250113 20:39:03.981271 1481 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:39:03.985007 update_engine[1481]: I20250113 20:39:03.981809 1481 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 20:39:03.985007 update_engine[1481]: I20250113 20:39:03.984086 1481 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:39:03.985007 update_engine[1481]: I20250113 20:39:03.984102 1481 omaha_request_action.cc:272] Request: Jan 13 20:39:03.985007 update_engine[1481]: Jan 13 20:39:03.985007 update_engine[1481]: Jan 13 20:39:03.985007 update_engine[1481]: Jan 13 20:39:03.985007 update_engine[1481]: Jan 13 20:39:03.985007 update_engine[1481]: Jan 13 20:39:03.985007 update_engine[1481]: Jan 13 20:39:03.985007 update_engine[1481]: Jan 13 20:39:03.985007 update_engine[1481]: Jan 13 20:39:03.985007 update_engine[1481]: I20250113 20:39:03.984112 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:39:03.985565 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 20:39:03.998319 update_engine[1481]: I20250113 20:39:03.997785 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:39:03.998319 update_engine[1481]: I20250113 20:39:03.998253 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:39:04.011540 update_engine[1481]: E20250113 20:39:04.011329 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:39:04.011540 update_engine[1481]: I20250113 20:39:04.011490 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 20:39:04.121320 sshd[4366]: Connection closed by 10.0.0.1 port 47406 Jan 13 20:39:04.121575 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:04.125882 systemd[1]: sshd@26-10.0.0.79:22-10.0.0.1:47406.service: Deactivated successfully. Jan 13 20:39:04.130804 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:39:04.135598 systemd-logind[1479]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:39:04.137656 systemd-logind[1479]: Removed session 27. Jan 13 20:39:06.780922 kubelet[2725]: E0113 20:39:06.772467 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:09.155427 systemd[1]: Started sshd@27-10.0.0.79:22-10.0.0.1:39606.service - OpenSSH per-connection server daemon (10.0.0.1:39606). Jan 13 20:39:09.250553 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 39606 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:09.258165 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:09.282214 systemd-logind[1479]: New session 28 of user core. Jan 13 20:39:09.305401 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:39:09.599313 sshd[4381]: Connection closed by 10.0.0.1 port 39606 Jan 13 20:39:09.600065 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:09.614334 systemd[1]: sshd@27-10.0.0.79:22-10.0.0.1:39606.service: Deactivated successfully. Jan 13 20:39:09.618922 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:39:09.629891 systemd-logind[1479]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:39:09.634377 systemd-logind[1479]: Removed session 28. Jan 13 20:39:11.778481 kubelet[2725]: E0113 20:39:11.778379 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:13.963668 update_engine[1481]: I20250113 20:39:13.962754 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:39:13.963668 update_engine[1481]: I20250113 20:39:13.963232 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:39:13.963668 update_engine[1481]: I20250113 20:39:13.963541 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:39:13.974793 update_engine[1481]: E20250113 20:39:13.973647 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:39:13.974793 update_engine[1481]: I20250113 20:39:13.974535 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 13 20:39:14.639279 systemd[1]: Started sshd@28-10.0.0.79:22-10.0.0.1:39616.service - OpenSSH per-connection server daemon (10.0.0.1:39616). 
Jan 13 20:39:14.725102 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 39616 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:14.727681 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:14.739710 systemd-logind[1479]: New session 29 of user core. Jan 13 20:39:14.748140 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 20:39:14.975932 sshd[4401]: Connection closed by 10.0.0.1 port 39616 Jan 13 20:39:14.978111 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:14.986465 systemd[1]: sshd@28-10.0.0.79:22-10.0.0.1:39616.service: Deactivated successfully. Jan 13 20:39:14.995718 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:39:15.002416 systemd-logind[1479]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:39:15.007394 systemd-logind[1479]: Removed session 29. Jan 13 20:39:15.770135 kubelet[2725]: E0113 20:39:15.769642 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:15.770135 kubelet[2725]: E0113 20:39:15.769995 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:20.020683 systemd[1]: Started sshd@29-10.0.0.79:22-10.0.0.1:39090.service - OpenSSH per-connection server daemon (10.0.0.1:39090). Jan 13 20:39:20.098290 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 39090 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:20.100620 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:20.113779 systemd-logind[1479]: New session 30 of user core. Jan 13 20:39:20.121168 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 13 20:39:20.335373 sshd[4418]: Connection closed by 10.0.0.1 port 39090 Jan 13 20:39:20.338571 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:20.344351 systemd[1]: sshd@29-10.0.0.79:22-10.0.0.1:39090.service: Deactivated successfully. Jan 13 20:39:20.350135 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 20:39:20.357184 systemd-logind[1479]: Session 30 logged out. Waiting for processes to exit. Jan 13 20:39:20.375118 systemd-logind[1479]: Removed session 30. Jan 13 20:39:23.961202 update_engine[1481]: I20250113 20:39:23.960331 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:39:23.961202 update_engine[1481]: I20250113 20:39:23.960777 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:39:23.961202 update_engine[1481]: I20250113 20:39:23.961112 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:39:23.967581 update_engine[1481]: E20250113 20:39:23.967481 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:39:23.967738 update_engine[1481]: I20250113 20:39:23.967608 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 13 20:39:25.353758 systemd[1]: Started sshd@30-10.0.0.79:22-10.0.0.1:39472.service - OpenSSH per-connection server daemon (10.0.0.1:39472). 
Jan 13 20:39:25.422441 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 39472 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:25.424794 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:25.442120 systemd-logind[1479]: New session 31 of user core. Jan 13 20:39:25.449946 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 13 20:39:25.631361 sshd[4432]: Connection closed by 10.0.0.1 port 39472 Jan 13 20:39:25.631389 sshd-session[4430]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:25.652321 systemd[1]: sshd@30-10.0.0.79:22-10.0.0.1:39472.service: Deactivated successfully. Jan 13 20:39:25.656106 systemd[1]: session-31.scope: Deactivated successfully. Jan 13 20:39:25.658987 systemd-logind[1479]: Session 31 logged out. Waiting for processes to exit. Jan 13 20:39:25.674469 systemd[1]: Started sshd@31-10.0.0.79:22-10.0.0.1:39474.service - OpenSSH per-connection server daemon (10.0.0.1:39474). Jan 13 20:39:25.676477 systemd-logind[1479]: Removed session 31. Jan 13 20:39:25.729056 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 39474 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:25.731388 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:25.737476 systemd-logind[1479]: New session 32 of user core. Jan 13 20:39:25.754323 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 13 20:39:25.769372 kubelet[2725]: E0113 20:39:25.769298 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:27.345435 containerd[1490]: time="2025-01-13T20:39:27.345391257Z" level=info msg="StopContainer for \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\" with timeout 30 (s)" Jan 13 20:39:27.357432 containerd[1490]: time="2025-01-13T20:39:27.357380522Z" level=info msg="Stop container \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\" with signal terminated" Jan 13 20:39:27.370504 systemd[1]: cri-containerd-13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4.scope: Deactivated successfully. Jan 13 20:39:27.385511 containerd[1490]: time="2025-01-13T20:39:27.385453409Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:39:27.398432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4-rootfs.mount: Deactivated successfully. 
Jan 13 20:39:27.413105 containerd[1490]: time="2025-01-13T20:39:27.413043052Z" level=info msg="StopContainer for \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\" with timeout 2 (s)" Jan 13 20:39:27.413619 containerd[1490]: time="2025-01-13T20:39:27.413501136Z" level=info msg="Stop container \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\" with signal terminated" Jan 13 20:39:27.413619 containerd[1490]: time="2025-01-13T20:39:27.413542605Z" level=info msg="shim disconnected" id=13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4 namespace=k8s.io Jan 13 20:39:27.413619 containerd[1490]: time="2025-01-13T20:39:27.413585135Z" level=warning msg="cleaning up after shim disconnected" id=13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4 namespace=k8s.io Jan 13 20:39:27.413619 containerd[1490]: time="2025-01-13T20:39:27.413596787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:27.426194 systemd-networkd[1412]: lxc_health: Link DOWN Jan 13 20:39:27.426208 systemd-networkd[1412]: lxc_health: Lost carrier Jan 13 20:39:27.434668 containerd[1490]: time="2025-01-13T20:39:27.434605577Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:39:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:39:27.441252 containerd[1490]: time="2025-01-13T20:39:27.441198975Z" level=info msg="StopContainer for \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\" returns successfully" Jan 13 20:39:27.449036 containerd[1490]: time="2025-01-13T20:39:27.448783914Z" level=info msg="StopPodSandbox for \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\"" Jan 13 20:39:27.454472 systemd[1]: cri-containerd-409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0.scope: Deactivated successfully. Jan 13 20:39:27.454935 systemd[1]: cri-containerd-409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0.scope: Consumed 8.007s CPU time. Jan 13 20:39:27.467658 containerd[1490]: time="2025-01-13T20:39:27.450754090Z" level=info msg="Container to stop \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:27.470129 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2-shm.mount: Deactivated successfully. Jan 13 20:39:27.478650 systemd[1]: cri-containerd-4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2.scope: Deactivated successfully. Jan 13 20:39:27.486605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0-rootfs.mount: Deactivated successfully. 
Jan 13 20:39:27.492927 containerd[1490]: time="2025-01-13T20:39:27.492838016Z" level=info msg="shim disconnected" id=409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0 namespace=k8s.io Jan 13 20:39:27.492927 containerd[1490]: time="2025-01-13T20:39:27.492900925Z" level=warning msg="cleaning up after shim disconnected" id=409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0 namespace=k8s.io Jan 13 20:39:27.492927 containerd[1490]: time="2025-01-13T20:39:27.492911314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:27.505280 containerd[1490]: time="2025-01-13T20:39:27.505121216Z" level=info msg="shim disconnected" id=4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2 namespace=k8s.io Jan 13 20:39:27.505280 containerd[1490]: time="2025-01-13T20:39:27.505174657Z" level=warning msg="cleaning up after shim disconnected" id=4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2 namespace=k8s.io Jan 13 20:39:27.505280 containerd[1490]: time="2025-01-13T20:39:27.505182752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:27.506391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2-rootfs.mount: Deactivated successfully. Jan 13 20:39:27.520106 containerd[1490]: time="2025-01-13T20:39:27.520059986Z" level=info msg="StopContainer for \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\" returns successfully" Jan 13 20:39:27.520701 containerd[1490]: time="2025-01-13T20:39:27.520648046Z" level=info msg="StopPodSandbox for \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\"" Jan 13 20:39:27.520787 containerd[1490]: time="2025-01-13T20:39:27.520712768Z" level=info msg="Container to stop \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:27.520787 containerd[1490]: time="2025-01-13T20:39:27.520758855Z" level=info msg="Container to stop \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:27.520787 containerd[1490]: time="2025-01-13T20:39:27.520772901Z" level=info msg="Container to stop \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:27.520787 containerd[1490]: time="2025-01-13T20:39:27.520786066Z" level=info msg="Container to stop \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:27.521066 containerd[1490]: time="2025-01-13T20:39:27.520799541Z" level=info msg="Container to stop \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:27.522465 containerd[1490]: time="2025-01-13T20:39:27.522428906Z" level=info msg="TearDown network for sandbox \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\" successfully" Jan 13 20:39:27.522465 containerd[1490]: time="2025-01-13T20:39:27.522450115Z" level=info msg="StopPodSandbox for \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\" returns successfully" Jan 13 20:39:27.528765 systemd[1]: cri-containerd-3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75.scope: Deactivated successfully. 
Jan 13 20:39:27.559210 containerd[1490]: time="2025-01-13T20:39:27.558965928Z" level=info msg="shim disconnected" id=3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75 namespace=k8s.io Jan 13 20:39:27.559210 containerd[1490]: time="2025-01-13T20:39:27.559030630Z" level=warning msg="cleaning up after shim disconnected" id=3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75 namespace=k8s.io Jan 13 20:39:27.559210 containerd[1490]: time="2025-01-13T20:39:27.559042232Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:27.575242 containerd[1490]: time="2025-01-13T20:39:27.575187939Z" level=info msg="TearDown network for sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" successfully" Jan 13 20:39:27.575242 containerd[1490]: time="2025-01-13T20:39:27.575223926Z" level=info msg="StopPodSandbox for \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" returns successfully" Jan 13 20:39:27.654082 kubelet[2725]: I0113 20:39:27.652696 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-hubble-tls\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654082 kubelet[2725]: I0113 20:39:27.652774 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-bpf-maps\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654082 kubelet[2725]: I0113 20:39:27.652807 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-lib-modules\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654082 kubelet[2725]: I0113 20:39:27.652949 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-etc-cni-netd\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654082 kubelet[2725]: I0113 20:39:27.652944 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.654082 kubelet[2725]: I0113 20:39:27.652988 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24pxl\" (UniqueName: \"kubernetes.io/projected/6e955397-8494-4322-a39c-650b18477d62-kube-api-access-24pxl\") pod \"6e955397-8494-4322-a39c-650b18477d62\" (UID: \"6e955397-8494-4322-a39c-650b18477d62\") " Jan 13 20:39:27.654730 kubelet[2725]: I0113 20:39:27.653025 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e955397-8494-4322-a39c-650b18477d62-cilium-config-path\") pod \"6e955397-8494-4322-a39c-650b18477d62\" (UID: \"6e955397-8494-4322-a39c-650b18477d62\") " Jan 13 20:39:27.654730 kubelet[2725]: I0113 20:39:27.653019 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.654730 kubelet[2725]: I0113 20:39:27.653056 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhdsj\" (UniqueName: \"kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-kube-api-access-dhdsj\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654730 kubelet[2725]: I0113 20:39:27.653085 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-xtables-lock\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654730 kubelet[2725]: I0113 20:39:27.653125 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cni-path\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654730 kubelet[2725]: I0113 20:39:27.653153 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-cgroup\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654962 kubelet[2725]: I0113 20:39:27.653183 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-config-path\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654962 kubelet[2725]: I0113 20:39:27.653212 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-run\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654962 kubelet[2725]: I0113 20:39:27.653246 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-clustermesh-secrets\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654962 kubelet[2725]: I0113 20:39:27.653277 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-host-proc-sys-kernel\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654962 kubelet[2725]: I0113 20:39:27.653304 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-hostproc\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.654962 kubelet[2725]: I0113 20:39:27.653332 2725 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-host-proc-sys-net\") pod \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\" (UID: \"ad36e6ba-c8b3-45da-a1e8-258425c0c1c7\") " Jan 13 20:39:27.655181 kubelet[2725]: I0113 20:39:27.653377 2725 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.655181 kubelet[2725]: I0113 20:39:27.653396 2725 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.655181 kubelet[2725]: I0113 20:39:27.653447 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.655181 kubelet[2725]: I0113 20:39:27.654863 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.656094 kubelet[2725]: I0113 20:39:27.656035 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.657076 kubelet[2725]: I0113 20:39:27.656884 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.657076 kubelet[2725]: I0113 20:39:27.656933 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.657076 kubelet[2725]: I0113 20:39:27.656962 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.657546 kubelet[2725]: I0113 20:39:27.657512 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:39:27.659869 kubelet[2725]: I0113 20:39:27.657891 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.659869 kubelet[2725]: I0113 20:39:27.657946 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:27.659869 kubelet[2725]: I0113 20:39:27.658255 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e955397-8494-4322-a39c-650b18477d62-kube-api-access-24pxl" (OuterVolumeSpecName: "kube-api-access-24pxl") pod "6e955397-8494-4322-a39c-650b18477d62" (UID: "6e955397-8494-4322-a39c-650b18477d62"). InnerVolumeSpecName "kube-api-access-24pxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:39:27.660018 kubelet[2725]: I0113 20:39:27.659961 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:39:27.660408 kubelet[2725]: I0113 20:39:27.660367 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:39:27.660762 kubelet[2725]: I0113 20:39:27.660725 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-kube-api-access-dhdsj" (OuterVolumeSpecName: "kube-api-access-dhdsj") pod "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" (UID: "ad36e6ba-c8b3-45da-a1e8-258425c0c1c7"). InnerVolumeSpecName "kube-api-access-dhdsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:39:27.661096 kubelet[2725]: I0113 20:39:27.661062 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e955397-8494-4322-a39c-650b18477d62-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6e955397-8494-4322-a39c-650b18477d62" (UID: "6e955397-8494-4322-a39c-650b18477d62"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:39:27.754289 kubelet[2725]: I0113 20:39:27.754249 2725 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754289 kubelet[2725]: I0113 20:39:27.754286 2725 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754289 kubelet[2725]: I0113 20:39:27.754297 2725 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754523 kubelet[2725]: I0113 20:39:27.754308 2725 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754523 kubelet[2725]: I0113 20:39:27.754317 2725 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754523 kubelet[2725]: I0113 20:39:27.754333 2725 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754523 kubelet[2725]: I0113 20:39:27.754342 2725 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754523 kubelet[2725]: I0113 20:39:27.754352 2725 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754523 kubelet[2725]: I0113 20:39:27.754365 2725 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754523 kubelet[2725]: I0113 20:39:27.754379 2725 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-24pxl\" (UniqueName: 
\"kubernetes.io/projected/6e955397-8494-4322-a39c-650b18477d62-kube-api-access-24pxl\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754523 kubelet[2725]: I0113 20:39:27.754390 2725 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754873 kubelet[2725]: I0113 20:39:27.754399 2725 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e955397-8494-4322-a39c-650b18477d62-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754873 kubelet[2725]: I0113 20:39:27.754408 2725 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dhdsj\" (UniqueName: \"kubernetes.io/projected/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-kube-api-access-dhdsj\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:27.754873 kubelet[2725]: I0113 20:39:27.754419 2725 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:28.264284 kubelet[2725]: I0113 20:39:28.264246 2725 scope.go:117] "RemoveContainer" containerID="409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0" Jan 13 20:39:28.272305 systemd[1]: Removed slice kubepods-burstable-podad36e6ba_c8b3_45da_a1e8_258425c0c1c7.slice - libcontainer container kubepods-burstable-podad36e6ba_c8b3_45da_a1e8_258425c0c1c7.slice. Jan 13 20:39:28.272430 systemd[1]: kubepods-burstable-podad36e6ba_c8b3_45da_a1e8_258425c0c1c7.slice: Consumed 8.109s CPU time. Jan 13 20:39:28.275796 containerd[1490]: time="2025-01-13T20:39:28.275469491Z" level=info msg="RemoveContainer for \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\"" Jan 13 20:39:28.275745 systemd[1]: Removed slice kubepods-besteffort-pod6e955397_8494_4322_a39c_650b18477d62.slice - libcontainer container kubepods-besteffort-pod6e955397_8494_4322_a39c_650b18477d62.slice. 
Jan 13 20:39:28.281146 containerd[1490]: time="2025-01-13T20:39:28.281079883Z" level=info msg="RemoveContainer for \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\" returns successfully" Jan 13 20:39:28.281516 kubelet[2725]: I0113 20:39:28.281416 2725 scope.go:117] "RemoveContainer" containerID="c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931" Jan 13 20:39:28.282643 containerd[1490]: time="2025-01-13T20:39:28.282610321Z" level=info msg="RemoveContainer for \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\"" Jan 13 20:39:28.287327 containerd[1490]: time="2025-01-13T20:39:28.287263287Z" level=info msg="RemoveContainer for \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\" returns successfully" Jan 13 20:39:28.287942 kubelet[2725]: I0113 20:39:28.287905 2725 scope.go:117] "RemoveContainer" containerID="750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f" Jan 13 20:39:28.292408 containerd[1490]: time="2025-01-13T20:39:28.292352046Z" level=info msg="RemoveContainer for \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\"" Jan 13 20:39:28.298185 containerd[1490]: time="2025-01-13T20:39:28.298135124Z" level=info msg="RemoveContainer for \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\" returns successfully" Jan 13 20:39:28.298450 kubelet[2725]: I0113 20:39:28.298418 2725 scope.go:117] "RemoveContainer" containerID="e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320" Jan 13 20:39:28.299594 containerd[1490]: time="2025-01-13T20:39:28.299548661Z" level=info msg="RemoveContainer for \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\"" Jan 13 20:39:28.303352 containerd[1490]: time="2025-01-13T20:39:28.303309054Z" level=info msg="RemoveContainer for \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\" returns successfully" Jan 13 20:39:28.303506 kubelet[2725]: I0113 20:39:28.303477 2725 scope.go:117] "RemoveContainer" containerID="24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1" Jan 13 20:39:28.304253 containerd[1490]: time="2025-01-13T20:39:28.304231544Z" level=info msg="RemoveContainer for \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\"" Jan 13 20:39:28.307716 containerd[1490]: time="2025-01-13T20:39:28.307683334Z" level=info msg="RemoveContainer for \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\" returns successfully" Jan 13 20:39:28.307944 kubelet[2725]: I0113 20:39:28.307901 2725 scope.go:117] "RemoveContainer" containerID="409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0" Jan 13 20:39:28.308087 containerd[1490]: time="2025-01-13T20:39:28.308052630Z" level=error msg="ContainerStatus for \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\": not found" Jan 13 20:39:28.316632 kubelet[2725]: E0113 20:39:28.316591 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\": not found" containerID="409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0" Jan 13 20:39:28.316739 kubelet[2725]: I0113 20:39:28.316723 2725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0"} err="failed to get container status \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\": rpc error: code = NotFound desc = an error occurred when try to find container \"409cce0db246e32ca6ec43daf280fa45f6f7d9986f86ad92c24e7d083e628ad0\": not found" Jan 13 20:39:28.316780 kubelet[2725]: I0113 20:39:28.316746 2725 scope.go:117] "RemoveContainer" containerID="c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931" Jan 13 20:39:28.317077 containerd[1490]: time="2025-01-13T20:39:28.317028822Z" level=error msg="ContainerStatus for \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\": not found" Jan 13 20:39:28.317296 kubelet[2725]: E0113 20:39:28.317183 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\": not found" containerID="c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931" Jan 13 20:39:28.317296 kubelet[2725]: I0113 20:39:28.317218 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931"} err="failed to get container status \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\": rpc error: code = NotFound desc = an error occurred when try to find container \"c42d5587587f16181b192835d69ccbf4b5f789ffdba41c0e6897db7affcfd931\": not found" Jan 13 20:39:28.317296 kubelet[2725]: I0113 20:39:28.317230 2725 scope.go:117] "RemoveContainer" containerID="750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f" Jan 13 20:39:28.317411 containerd[1490]: time="2025-01-13T20:39:28.317372360Z" level=error msg="ContainerStatus for \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\": not found" Jan 13 20:39:28.317515 kubelet[2725]: E0113 20:39:28.317484 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\": not found" containerID="750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f" Jan 13 20:39:28.317585 kubelet[2725]: I0113 20:39:28.317518 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f"} err="failed to get container status \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\": rpc error: code = NotFound desc = an error occurred when try to find container \"750694bc57068c0bc941d16ca6cd7d85fa8003c0b1d9d1c6482b2814456cd29f\": not found" Jan 13 20:39:28.317585 kubelet[2725]: I0113 20:39:28.317531 2725 scope.go:117] "RemoveContainer" containerID="e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320" Jan 13 20:39:28.317661 containerd[1490]: time="2025-01-13T20:39:28.317646908Z" level=error msg="ContainerStatus for 
\"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\": not found" Jan 13 20:39:28.317768 kubelet[2725]: E0113 20:39:28.317747 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\": not found" containerID="e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320" Jan 13 20:39:28.317828 kubelet[2725]: I0113 20:39:28.317775 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320"} err="failed to get container status \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\": rpc error: code = NotFound desc = an error occurred when try to find container \"e208d508980d6cacc6edf6901532a3aa7332955c17b2d81df644302d5246b320\": not found" Jan 13 20:39:28.317828 kubelet[2725]: I0113 20:39:28.317786 2725 scope.go:117] "RemoveContainer" containerID="24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1" Jan 13 20:39:28.317952 containerd[1490]: time="2025-01-13T20:39:28.317918971Z" level=error msg="ContainerStatus for \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\": not found" Jan 13 20:39:28.318054 kubelet[2725]: E0113 20:39:28.318037 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\": not found" containerID="24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1" Jan 13 20:39:28.318129 kubelet[2725]: I0113 20:39:28.318061 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1"} err="failed to get container status \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"24e23a99580233c69706f2c7d26de702cfb17aeff2b194b2d0d30d579aae74c1\": not found" Jan 13 20:39:28.318129 kubelet[2725]: I0113 20:39:28.318071 2725 scope.go:117] "RemoveContainer" containerID="13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4" Jan 13 20:39:28.319312 containerd[1490]: time="2025-01-13T20:39:28.319289246Z" level=info msg="RemoveContainer for \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\"" Jan 13 20:39:28.323737 containerd[1490]: time="2025-01-13T20:39:28.323712018Z" level=info msg="RemoveContainer for \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\" returns successfully" Jan 13 20:39:28.324015 kubelet[2725]: I0113 20:39:28.323998 2725 scope.go:117] "RemoveContainer" containerID="13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4" Jan 13 20:39:28.324306 containerd[1490]: time="2025-01-13T20:39:28.324259561Z" level=error msg="ContainerStatus for \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\": not found" Jan 13 20:39:28.324521 kubelet[2725]: E0113 20:39:28.324486 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\": not found" containerID="13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4" Jan 13 20:39:28.324564 kubelet[2725]: I0113 20:39:28.324539 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4"} err="failed to get container status \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\": rpc error: code = NotFound desc = an error occurred when try to find container \"13ec0a94c59530c981bf599f8b6a40c75960a4fc0f517c7bbba4e998baeb6aa4\": not found" Jan 13 20:39:28.356971 systemd[1]: var-lib-kubelet-pods-6e955397\x2d8494\x2d4322\x2da39c\x2d650b18477d62-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24pxl.mount: Deactivated successfully. Jan 13 20:39:28.357121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75-rootfs.mount: Deactivated successfully. Jan 13 20:39:28.357215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75-shm.mount: Deactivated successfully. Jan 13 20:39:28.357304 systemd[1]: var-lib-kubelet-pods-ad36e6ba\x2dc8b3\x2d45da\x2da1e8\x2d258425c0c1c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddhdsj.mount: Deactivated successfully. Jan 13 20:39:28.357404 systemd[1]: var-lib-kubelet-pods-ad36e6ba\x2dc8b3\x2d45da\x2da1e8\x2d258425c0c1c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:39:28.357497 systemd[1]: var-lib-kubelet-pods-ad36e6ba\x2dc8b3\x2d45da\x2da1e8\x2d258425c0c1c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:39:28.772217 kubelet[2725]: I0113 20:39:28.772161 2725 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6e955397-8494-4322-a39c-650b18477d62" path="/var/lib/kubelet/pods/6e955397-8494-4322-a39c-650b18477d62/volumes" Jan 13 20:39:28.772867 kubelet[2725]: I0113 20:39:28.772804 2725 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" path="/var/lib/kubelet/pods/ad36e6ba-c8b3-45da-a1e8-258425c0c1c7/volumes" Jan 13 20:39:29.301793 sshd[4446]: Connection closed by 10.0.0.1 port 39474 Jan 13 20:39:29.302467 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:29.312692 systemd[1]: sshd@31-10.0.0.79:22-10.0.0.1:39474.service: Deactivated successfully. Jan 13 20:39:29.314645 systemd[1]: session-32.scope: Deactivated successfully. Jan 13 20:39:29.316380 systemd-logind[1479]: Session 32 logged out. Waiting for processes to exit. Jan 13 20:39:29.332147 systemd[1]: Started sshd@32-10.0.0.79:22-10.0.0.1:39484.service - OpenSSH per-connection server daemon (10.0.0.1:39484). Jan 13 20:39:29.333100 systemd-logind[1479]: Removed session 32. 
Jan 13 20:39:29.373861 sshd[4607]: Accepted publickey for core from 10.0.0.1 port 39484 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:29.375436 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:29.379797 systemd-logind[1479]: New session 33 of user core. Jan 13 20:39:29.387957 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 13 20:39:29.860569 sshd[4609]: Connection closed by 10.0.0.1 port 39484 Jan 13 20:39:29.861092 sshd-session[4607]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:29.874709 systemd[1]: sshd@32-10.0.0.79:22-10.0.0.1:39484.service: Deactivated successfully. Jan 13 20:39:29.883341 kubelet[2725]: I0113 20:39:29.880181 2725 topology_manager.go:215] "Topology Admit Handler" podUID="98161507-a8a4-4cb8-a847-1066e9754307" podNamespace="kube-system" podName="cilium-d8fzr" Jan 13 20:39:29.883341 kubelet[2725]: E0113 20:39:29.880247 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" containerName="apply-sysctl-overwrites" Jan 13 20:39:29.883341 kubelet[2725]: E0113 20:39:29.880258 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" containerName="clean-cilium-state" Jan 13 20:39:29.883341 kubelet[2725]: E0113 20:39:29.880267 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e955397-8494-4322-a39c-650b18477d62" containerName="cilium-operator" Jan 13 20:39:29.883341 kubelet[2725]: E0113 20:39:29.880277 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" containerName="mount-cgroup" Jan 13 20:39:29.883341 kubelet[2725]: E0113 20:39:29.880286 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" containerName="mount-bpf-fs" Jan 13 20:39:29.883341 kubelet[2725]: E0113 20:39:29.880295 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" containerName="cilium-agent" Jan 13 20:39:29.883341 kubelet[2725]: I0113 20:39:29.880319 2725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e955397-8494-4322-a39c-650b18477d62" containerName="cilium-operator" Jan 13 20:39:29.883341 kubelet[2725]: I0113 20:39:29.880329 2725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad36e6ba-c8b3-45da-a1e8-258425c0c1c7" containerName="cilium-agent" Jan 13 20:39:29.887001 systemd[1]: session-33.scope: Deactivated successfully. Jan 13 20:39:29.892737 systemd-logind[1479]: Session 33 logged out. Waiting for processes to exit. Jan 13 20:39:29.904547 systemd[1]: Started sshd@33-10.0.0.79:22-10.0.0.1:39498.service - OpenSSH per-connection server daemon (10.0.0.1:39498). Jan 13 20:39:29.910695 systemd-logind[1479]: Removed session 33. Jan 13 20:39:29.922717 systemd[1]: Created slice kubepods-burstable-pod98161507_a8a4_4cb8_a847_1066e9754307.slice - libcontainer container kubepods-burstable-pod98161507_a8a4_4cb8_a847_1066e9754307.slice. 
Jan 13 20:39:29.967829 kubelet[2725]: I0113 20:39:29.967777 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2f7d\" (UniqueName: \"kubernetes.io/projected/98161507-a8a4-4cb8-a847-1066e9754307-kube-api-access-g2f7d\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.967829 kubelet[2725]: I0113 20:39:29.967829 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-hostproc\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968004 kubelet[2725]: I0113 20:39:29.967851 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98161507-a8a4-4cb8-a847-1066e9754307-cilium-ipsec-secrets\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968004 kubelet[2725]: I0113 20:39:29.967870 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-cilium-run\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968004 kubelet[2725]: I0113 20:39:29.967887 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98161507-a8a4-4cb8-a847-1066e9754307-clustermesh-secrets\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968004 kubelet[2725]: I0113 20:39:29.967918 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-bpf-maps\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968004 kubelet[2725]: I0113 20:39:29.967960 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-cilium-cgroup\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968138 kubelet[2725]: I0113 20:39:29.968010 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-cni-path\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968138 kubelet[2725]: I0113 20:39:29.968030 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-xtables-lock\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968138 kubelet[2725]: I0113 20:39:29.968053 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-etc-cni-netd\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968138 kubelet[2725]: I0113 20:39:29.968069 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-lib-modules\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968138 kubelet[2725]: I0113 20:39:29.968096 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98161507-a8a4-4cb8-a847-1066e9754307-cilium-config-path\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968138 kubelet[2725]: I0113 20:39:29.968113 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-host-proc-sys-net\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968260 kubelet[2725]: I0113 20:39:29.968141 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98161507-a8a4-4cb8-a847-1066e9754307-hubble-tls\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.968260 kubelet[2725]: I0113 20:39:29.968160 2725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98161507-a8a4-4cb8-a847-1066e9754307-host-proc-sys-kernel\") pod \"cilium-d8fzr\" (UID: \"98161507-a8a4-4cb8-a847-1066e9754307\") " pod="kube-system/cilium-d8fzr" Jan 13 20:39:29.973417 sshd[4620]: Accepted publickey for core from 10.0.0.1 port 39498 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:29.975107 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:29.979285 systemd-logind[1479]: New session 34 of user core. Jan 13 20:39:29.986953 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 13 20:39:30.036594 sshd[4622]: Connection closed by 10.0.0.1 port 39498 Jan 13 20:39:30.037163 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:30.047533 systemd[1]: sshd@33-10.0.0.79:22-10.0.0.1:39498.service: Deactivated successfully. Jan 13 20:39:30.049481 systemd[1]: session-34.scope: Deactivated successfully. Jan 13 20:39:30.050994 systemd-logind[1479]: Session 34 logged out. Waiting for processes to exit. Jan 13 20:39:30.057044 systemd[1]: Started sshd@34-10.0.0.79:22-10.0.0.1:39500.service - OpenSSH per-connection server daemon (10.0.0.1:39500). Jan 13 20:39:30.058079 systemd-logind[1479]: Removed session 34. Jan 13 20:39:30.105936 sshd[4628]: Accepted publickey for core from 10.0.0.1 port 39500 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:30.107603 sshd-session[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:30.111212 systemd-logind[1479]: New session 35 of user core. 
Jan 13 20:39:30.119953 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 13 20:39:30.226323 kubelet[2725]: E0113 20:39:30.226266 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:30.227385 containerd[1490]: time="2025-01-13T20:39:30.226957565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8fzr,Uid:98161507-a8a4-4cb8-a847-1066e9754307,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:30.252247 containerd[1490]: time="2025-01-13T20:39:30.252054737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:30.252247 containerd[1490]: time="2025-01-13T20:39:30.252175885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:30.252247 containerd[1490]: time="2025-01-13T20:39:30.252217504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:30.252502 containerd[1490]: time="2025-01-13T20:39:30.252342700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:30.271973 systemd[1]: Started cri-containerd-8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa.scope - libcontainer container 8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa. Jan 13 20:39:30.296017 containerd[1490]: time="2025-01-13T20:39:30.295967634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8fzr,Uid:98161507-a8a4-4cb8-a847-1066e9754307,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\"" Jan 13 20:39:30.296734 kubelet[2725]: E0113 20:39:30.296712 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:30.298727 containerd[1490]: time="2025-01-13T20:39:30.298687424Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:39:30.313301 containerd[1490]: time="2025-01-13T20:39:30.313228068Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bdc24d880f6b7fd61323d7afabe8c2c0abe2b135e158d844bcfe8683d6049f6a\"" Jan 13 20:39:30.313873 containerd[1490]: time="2025-01-13T20:39:30.313798884Z" level=info msg="StartContainer for \"bdc24d880f6b7fd61323d7afabe8c2c0abe2b135e158d844bcfe8683d6049f6a\"" Jan 13 20:39:30.343054 systemd[1]: Started cri-containerd-bdc24d880f6b7fd61323d7afabe8c2c0abe2b135e158d844bcfe8683d6049f6a.scope - libcontainer container bdc24d880f6b7fd61323d7afabe8c2c0abe2b135e158d844bcfe8683d6049f6a. Jan 13 20:39:30.378561 containerd[1490]: time="2025-01-13T20:39:30.378468217Z" level=info msg="StartContainer for \"bdc24d880f6b7fd61323d7afabe8c2c0abe2b135e158d844bcfe8683d6049f6a\" returns successfully" Jan 13 20:39:30.390281 systemd[1]: cri-containerd-bdc24d880f6b7fd61323d7afabe8c2c0abe2b135e158d844bcfe8683d6049f6a.scope: Deactivated successfully. 
Jan 13 20:39:30.426148 containerd[1490]: time="2025-01-13T20:39:30.425944084Z" level=info msg="shim disconnected" id=bdc24d880f6b7fd61323d7afabe8c2c0abe2b135e158d844bcfe8683d6049f6a namespace=k8s.io Jan 13 20:39:30.426148 containerd[1490]: time="2025-01-13T20:39:30.425994289Z" level=warning msg="cleaning up after shim disconnected" id=bdc24d880f6b7fd61323d7afabe8c2c0abe2b135e158d844bcfe8683d6049f6a namespace=k8s.io Jan 13 20:39:30.426148 containerd[1490]: time="2025-01-13T20:39:30.426002685Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:30.775329 containerd[1490]: time="2025-01-13T20:39:30.775213311Z" level=info msg="StopPodSandbox for \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\"" Jan 13 20:39:30.775329 containerd[1490]: time="2025-01-13T20:39:30.775312327Z" level=info msg="TearDown network for sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" successfully" Jan 13 20:39:30.775329 containerd[1490]: time="2025-01-13T20:39:30.775323138Z" level=info msg="StopPodSandbox for \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" returns successfully" Jan 13 20:39:30.775932 containerd[1490]: time="2025-01-13T20:39:30.775885759Z" level=info msg="RemovePodSandbox for \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\"" Jan 13 20:39:30.775932 containerd[1490]: time="2025-01-13T20:39:30.775929422Z" level=info msg="Forcibly stopping sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\"" Jan 13 20:39:30.776111 containerd[1490]: time="2025-01-13T20:39:30.775989644Z" level=info msg="TearDown network for sandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" successfully" Jan 13 20:39:30.779522 containerd[1490]: time="2025-01-13T20:39:30.779479917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:39:30.779522 containerd[1490]: time="2025-01-13T20:39:30.779525453Z" level=info msg="RemovePodSandbox \"3b88edc4a078756e2203fc0e1c86d2007e7a84694ed31bf9f5a06d57e2bcbb75\" returns successfully" Jan 13 20:39:30.779955 containerd[1490]: time="2025-01-13T20:39:30.779920709Z" level=info msg="StopPodSandbox for \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\"" Jan 13 20:39:30.780020 containerd[1490]: time="2025-01-13T20:39:30.780012402Z" level=info msg="TearDown network for sandbox \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\" successfully" Jan 13 20:39:30.780047 containerd[1490]: time="2025-01-13T20:39:30.780022621Z" level=info msg="StopPodSandbox for \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\" returns successfully" Jan 13 20:39:30.780439 containerd[1490]: time="2025-01-13T20:39:30.780414219Z" level=info msg="RemovePodSandbox for \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\"" Jan 13 20:39:30.780496 containerd[1490]: time="2025-01-13T20:39:30.780440398Z" level=info msg="Forcibly stopping sandbox \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\"" Jan 13 20:39:30.780535 containerd[1490]: time="2025-01-13T20:39:30.780492938Z" level=info msg="TearDown network for sandbox \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\" successfully" Jan 13 20:39:30.783754 containerd[1490]: time="2025-01-13T20:39:30.783722188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:39:30.783843 containerd[1490]: time="2025-01-13T20:39:30.783755691Z" level=info msg="RemovePodSandbox \"4e39f5265811d6d10a354df571b21c3e6055276c1ba0c32c56de0d2c5e7a94a2\" returns successfully" Jan 13 20:39:30.860037 kubelet[2725]: E0113 20:39:30.859992 2725 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:39:31.280737 kubelet[2725]: E0113 20:39:31.280693 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:31.283477 containerd[1490]: time="2025-01-13T20:39:31.282561850Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:39:31.296403 containerd[1490]: time="2025-01-13T20:39:31.296192756Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868\"" Jan 13 20:39:31.296860 containerd[1490]: time="2025-01-13T20:39:31.296816211Z" level=info msg="StartContainer for \"1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868\"" Jan 13 20:39:31.326026 systemd[1]: Started cri-containerd-1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868.scope - libcontainer container 1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868. 
Jan 13 20:39:31.356019 containerd[1490]: time="2025-01-13T20:39:31.355951981Z" level=info msg="StartContainer for \"1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868\" returns successfully" Jan 13 20:39:31.362427 systemd[1]: cri-containerd-1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868.scope: Deactivated successfully. Jan 13 20:39:31.391134 containerd[1490]: time="2025-01-13T20:39:31.391068835Z" level=info msg="shim disconnected" id=1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868 namespace=k8s.io Jan 13 20:39:31.391134 containerd[1490]: time="2025-01-13T20:39:31.391127797Z" level=warning msg="cleaning up after shim disconnected" id=1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868 namespace=k8s.io Jan 13 20:39:31.391134 containerd[1490]: time="2025-01-13T20:39:31.391138237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:32.074443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1583c5829ac9731a859422dcfb215dbaba0f11e4b94923efa9a8260b96143868-rootfs.mount: Deactivated successfully. Jan 13 20:39:32.283227 kubelet[2725]: E0113 20:39:32.283140 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:32.285722 containerd[1490]: time="2025-01-13T20:39:32.285668426Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:39:32.468905 containerd[1490]: time="2025-01-13T20:39:32.468738602Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490\"" Jan 13 20:39:32.469805 containerd[1490]: time="2025-01-13T20:39:32.469771530Z" level=info msg="StartContainer for \"995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490\"" Jan 13 20:39:32.501943 systemd[1]: Started cri-containerd-995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490.scope - libcontainer container 995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490. Jan 13 20:39:32.532542 containerd[1490]: time="2025-01-13T20:39:32.532499715Z" level=info msg="StartContainer for \"995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490\" returns successfully" Jan 13 20:39:32.535003 systemd[1]: cri-containerd-995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490.scope: Deactivated successfully. Jan 13 20:39:32.557700 containerd[1490]: time="2025-01-13T20:39:32.557624043Z" level=info msg="shim disconnected" id=995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490 namespace=k8s.io Jan 13 20:39:32.557700 containerd[1490]: time="2025-01-13T20:39:32.557686290Z" level=warning msg="cleaning up after shim disconnected" id=995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490 namespace=k8s.io Jan 13 20:39:32.557700 containerd[1490]: time="2025-01-13T20:39:32.557696559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:33.074345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-995cf4c0c753c2c216e567b3c48bcfe4083ddb9ea5f713381c7043a8756c4490-rootfs.mount: Deactivated successfully. 
Jan 13 20:39:33.245310 kubelet[2725]: I0113 20:39:33.245258 2725 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:39:33Z","lastTransitionTime":"2025-01-13T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:39:33.286176 kubelet[2725]: E0113 20:39:33.286147 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:33.287586 containerd[1490]: time="2025-01-13T20:39:33.287533045Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:39:33.304529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972895951.mount: Deactivated successfully. Jan 13 20:39:33.311773 containerd[1490]: time="2025-01-13T20:39:33.311727245Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493\"" Jan 13 20:39:33.312198 containerd[1490]: time="2025-01-13T20:39:33.312168858Z" level=info msg="StartContainer for \"0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493\"" Jan 13 20:39:33.343951 systemd[1]: Started cri-containerd-0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493.scope - libcontainer container 0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493. Jan 13 20:39:33.370875 systemd[1]: cri-containerd-0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493.scope: Deactivated successfully. Jan 13 20:39:33.372899 containerd[1490]: time="2025-01-13T20:39:33.372797838Z" level=info msg="StartContainer for \"0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493\" returns successfully" Jan 13 20:39:33.397785 containerd[1490]: time="2025-01-13T20:39:33.397719490Z" level=info msg="shim disconnected" id=0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493 namespace=k8s.io Jan 13 20:39:33.397785 containerd[1490]: time="2025-01-13T20:39:33.397779673Z" level=warning msg="cleaning up after shim disconnected" id=0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493 namespace=k8s.io Jan 13 20:39:33.397785 containerd[1490]: time="2025-01-13T20:39:33.397791275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:33.768868 kubelet[2725]: E0113 20:39:33.768722 2725 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-l9xbb" podUID="9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d" Jan 13 20:39:33.959032 update_engine[1481]: I20250113 20:39:33.958906 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:39:33.959564 update_engine[1481]: I20250113 20:39:33.959412 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:39:33.959768 update_engine[1481]: I20250113 20:39:33.959715 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:39:33.965944 update_engine[1481]: E20250113 20:39:33.965874 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:39:33.965999 update_engine[1481]: I20250113 20:39:33.965970 1481 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:39:33.965999 update_engine[1481]: I20250113 20:39:33.965982 1481 omaha_request_action.cc:617] Omaha request response: Jan 13 20:39:33.966157 update_engine[1481]: E20250113 20:39:33.966120 1481 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 13 20:39:33.966198 update_engine[1481]: I20250113 20:39:33.966156 1481 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 13 20:39:33.966198 update_engine[1481]: I20250113 20:39:33.966167 1481 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:39:33.966198 update_engine[1481]: I20250113 20:39:33.966176 1481 update_attempter.cc:306] Processing Done. Jan 13 20:39:33.966198 update_engine[1481]: E20250113 20:39:33.966192 1481 update_attempter.cc:619] Update failed. Jan 13 20:39:33.966294 update_engine[1481]: I20250113 20:39:33.966207 1481 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 13 20:39:33.966294 update_engine[1481]: I20250113 20:39:33.966215 1481 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 13 20:39:33.966294 update_engine[1481]: I20250113 20:39:33.966223 1481 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 13 20:39:33.966381 update_engine[1481]: I20250113 20:39:33.966302 1481 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:39:33.966381 update_engine[1481]: I20250113 20:39:33.966329 1481 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:39:33.966381 update_engine[1481]: I20250113 20:39:33.966351 1481 omaha_request_action.cc:272] Request: Jan 13 20:39:33.966381 update_engine[1481]: Jan 13 20:39:33.966381 update_engine[1481]: Jan 13 20:39:33.966381 update_engine[1481]: Jan 13 20:39:33.966381 update_engine[1481]: Jan 13 20:39:33.966381 update_engine[1481]: Jan 13 20:39:33.966381 update_engine[1481]: Jan 13 20:39:33.966381 update_engine[1481]: I20250113 20:39:33.966365 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:39:33.966640 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 13 20:39:33.966938 update_engine[1481]: I20250113 20:39:33.966647 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:39:33.966938 update_engine[1481]: I20250113 20:39:33.966900 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:39:33.974840 update_engine[1481]: E20250113 20:39:33.974740 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:39:33.975002 update_engine[1481]: I20250113 20:39:33.974885 1481 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:39:33.975002 update_engine[1481]: I20250113 20:39:33.974897 1481 omaha_request_action.cc:617] Omaha request response: Jan 13 20:39:33.975002 update_engine[1481]: I20250113 20:39:33.974907 1481 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:39:33.975002 update_engine[1481]: I20250113 20:39:33.974915 1481 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:39:33.975002 update_engine[1481]: I20250113 20:39:33.974923 1481 update_attempter.cc:306] Processing Done. Jan 13 20:39:33.975002 update_engine[1481]: I20250113 20:39:33.974931 1481 update_attempter.cc:310] Error event sent. Jan 13 20:39:33.975002 update_engine[1481]: I20250113 20:39:33.974946 1481 update_check_scheduler.cc:74] Next update check in 41m57s Jan 13 20:39:33.975340 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 13 20:39:34.074561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bfa649eab64de07cb09283429c40c96e968d302bc03827a05afa4a031642493-rootfs.mount: Deactivated successfully. Jan 13 20:39:34.289998 kubelet[2725]: E0113 20:39:34.289969 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:34.292268 containerd[1490]: time="2025-01-13T20:39:34.292196569Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:39:34.310532 containerd[1490]: time="2025-01-13T20:39:34.310466467Z" level=info msg="CreateContainer within sandbox \"8d2b20e99bb3a06e5ea683c71fa04c86cbdbc4ac6ccd8a835aaf4a777b4d93fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5fdcc1795cd3d4e91654cc8fdde0d960fce8693a6cdc5ab871ef2ceedf456259\"" Jan 13 20:39:34.310929 containerd[1490]: time="2025-01-13T20:39:34.310902538Z" level=info msg="StartContainer for \"5fdcc1795cd3d4e91654cc8fdde0d960fce8693a6cdc5ab871ef2ceedf456259\"" Jan 13 20:39:34.344953 systemd[1]: Started cri-containerd-5fdcc1795cd3d4e91654cc8fdde0d960fce8693a6cdc5ab871ef2ceedf456259.scope - libcontainer container 5fdcc1795cd3d4e91654cc8fdde0d960fce8693a6cdc5ab871ef2ceedf456259. 
Jan 13 20:39:34.452435 containerd[1490]: time="2025-01-13T20:39:34.452376021Z" level=info msg="StartContainer for \"5fdcc1795cd3d4e91654cc8fdde0d960fce8693a6cdc5ab871ef2ceedf456259\" returns successfully" Jan 13 20:39:34.880855 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 13 20:39:35.294747 kubelet[2725]: E0113 20:39:35.294598 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:35.308892 kubelet[2725]: I0113 20:39:35.308130 2725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d8fzr" podStartSLOduration=6.308093954 podStartE2EDuration="6.308093954s" podCreationTimestamp="2025-01-13 20:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:35.307706754 +0000 UTC m=+124.644079231" watchObservedRunningTime="2025-01-13 20:39:35.308093954 +0000 UTC m=+124.644466441" Jan 13 20:39:35.768663 kubelet[2725]: E0113 20:39:35.768610 2725 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-l9xbb" podUID="9b9dbf12-9e6b-4ac6-a266-2b70701bbf6d" Jan 13 20:39:36.299715 kubelet[2725]: E0113 20:39:36.297899 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:37.769129 kubelet[2725]: E0113 20:39:37.769087 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:38.005401 systemd-networkd[1412]: lxc_health: Link UP Jan 13 20:39:38.006656 systemd-networkd[1412]: lxc_health: Gained carrier Jan 13 20:39:38.228407 kubelet[2725]: E0113 20:39:38.228342 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:38.301443 kubelet[2725]: E0113 20:39:38.301399 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:39.303714 kubelet[2725]: E0113 20:39:39.303682 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:39.407941 systemd-networkd[1412]: lxc_health: Gained IPv6LL Jan 13 20:39:45.061094 sshd[4634]: Connection closed by 10.0.0.1 port 39500 Jan 13 20:39:45.064142 sshd-session[4628]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:45.069657 systemd[1]: sshd@34-10.0.0.79:22-10.0.0.1:39500.service: Deactivated successfully. Jan 13 20:39:45.072956 systemd[1]: session-35.scope: Deactivated successfully. Jan 13 20:39:45.074793 systemd-logind[1479]: Session 35 logged out. Waiting for processes to exit. Jan 13 20:39:45.091621 systemd-logind[1479]: Removed session 35.